WO2024096934A1 - Identification and marking of video data units for network transport of video data - Google Patents

Identification and marking of video data units for network transport of video data

Info

Publication number
WO2024096934A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
video
data
identifier
picture
Prior art date
Application number
PCT/US2023/024836
Other languages
English (en)
Inventor
Yong He
Muhammed Zeyd Coban
Prashanth Haridas Hande
Imed Bouazizi
Nikolai Konrad Leung
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Priority claimed from US 18/329,994, published as US 2024/0146994 A1
Application filed by Qualcomm Incorporated
Publication of WO2024096934A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring
    • H04N21/85406: Content authoring involving a specific file format, e.g. MP4 format

Definitions

  • This disclosure relates to transport of encoded video data.
  • Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like.
  • Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263 or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), ITU-T H.265 (also referred to as High Efficiency Video Coding (HEVC)), and extensions of such standards, to transmit and receive digital video information more efficiently.
  • Video compression techniques perform spatial prediction and/or temporal prediction to reduce or remove redundancy inherent in video sequences.
  • a video frame or slice may be partitioned into macroblocks. Each macroblock can be further partitioned.
  • Macroblocks in an intra-coded (I) frame or slice are encoded using spatial prediction with respect to neighboring macroblocks.
  • Macroblocks in an inter-coded (P or B) frame or slice may use spatial prediction with respect to neighboring macroblocks in the same frame or slice or temporal prediction with respect to other reference frames.
  • the video data may be packetized for transmission or storage.
  • the video data may be assembled into a video file conforming to any of a variety of standards, such as the International Organization for Standardization (ISO) base media file format and extensions thereof, such as AVC.
  • this disclosure describes techniques related to signaling characteristics of media data contained in a network packet in a header of the packet.
  • Such characteristics may include, for example, an identifier for the media data (e.g., for a slice of a picture and/or for the picture).
  • the identifier may indicate, or be related to, whether the media data can be discarded in certain circumstances, such as according to a random access technique used to begin reception of a bitstream including the media data. For example, if the media data depends on earlier media data of the bitstream that was not received, the packet may be discarded. Such may occur if the media data is included in a gradual decoder refresh (GDR) picture used for random access, if the media data is included in a random access skippable leading (RASL) picture, or other such instances.
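The discard rule described above can be illustrated with a short example. This is a hypothetical sketch, not the patent's implementation: the frame-type tag, the single random-access-point comparison, and the function name are all assumptions, and a GDR-based join would apply an analogous test against the recovery point.

```python
# Sketch (illustrative, not the patent's method): after joining a stream
# at a random access point, a receiver may drop media units that depend
# on earlier, unreceived data without sending them to the decoder.

def can_discard(frame_type, associated_rap, join_point):
    """Return True if the frame may be dropped without harming decoding.

    frame_type: simplified type tag assumed readable from the packet header.
    associated_rap: decoding-order position of the frame's random access point.
    join_point: decoding-order position where reception of the bitstream began.
    """
    # RASL frames reference pictures from before their associated random
    # access point; if decoding began exactly at that point, those
    # references were never received, so the frame is undecodable.
    return frame_type == "RASL" and associated_rap == join_point

# A receiver that joined at decoding order 100 drops RASL frames tied to
# that random access point but keeps ordinary trailing frames.
assert can_discard("RASL", 100, 100) is True
assert can_discard("TRAIL", 100, 100) is False
```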
  • a method of receiving video data includes receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extracting, from the packet header, a video frame identifier for the frame of video data; and processing the payload according to the video frame identifier.
  • a device for receiving video data includes a memory configured to store video data and one or more processors implemented in circuitry and configured to receive a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extract, from the packet header, a video frame identifier for the frame of video data; and process the payload according to the video frame identifier.
  • a device for receiving video data includes means for receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; means for extracting, from the packet header, a video frame identifier for the frame of video data; and means for processing the payload according to the video frame identifier.
  • a computer-readable storage medium has stored thereon instructions that, when executed, cause a processor to receive a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extract, from the packet header, a video frame identifier for the frame of video data; and process the payload according to the video frame identifier.
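As an illustration of the claimed structure (a packet header separate from the payload, with a video frame identifier readable from the header), the following sketch uses a hypothetical header layout; the field widths and names are assumptions, not a format defined by this disclosure.

```python
# Sketch of a packet whose header carries a video frame identifier outside
# the (possibly encrypted) payload, so network elements can read it.
# The 4-byte layout below is a hypothetical example.
import struct

HEADER_FMT = "!HBB"  # frame_id (16 bits), temporal_layer (8 bits), flags (8 bits)
HEADER_LEN = struct.calcsize(HEADER_FMT)

def build_packet(frame_id, temporal_layer, flags, payload):
    return struct.pack(HEADER_FMT, frame_id, temporal_layer, flags) + payload

def parse_packet(packet):
    """Extract the video frame identifier without touching the payload."""
    frame_id, temporal_layer, flags = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    return frame_id, temporal_layer, flags, packet[HEADER_LEN:]

pkt = build_packet(frame_id=42, temporal_layer=1, flags=0, payload=b"\x00\x01slice-data")
assert parse_packet(pkt) == (42, 1, 0, b"\x00\x01slice-data")
```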
  • FIG. 1 is a block diagram illustrating an example system that implements techniques for streaming media data over a network.
  • FIG. 2 is a conceptual diagram illustrating an example architecture for extended reality (XR) traffic delivery.
  • FIG. 3 is a conceptual diagram illustrating a series of gradual decoder refresh (GDR) frames of video data.
  • FIG. 4 is a conceptual diagram illustrating example sets of identifying information for protocol data units (PDUs).
  • FIG. 5 is a block diagram illustrating elements of an example video file.
  • FIG. 6 is a flowchart illustrating an example method including sending a packet including media data according to the techniques of this disclosure.
  • FIG. 7 is a flowchart illustrating an example method including receiving a packet including media data according to the techniques of this disclosure.
  • this disclosure describes techniques related to sending and receiving extended reality (XR) media data, such as augmented reality (AR), mixed reality (MR), and/or virtual reality (VR).
  • a media communication session between two network devices such as a server device and a client device or two user equipment (UE) devices, may include audio data, video data, and/or XR data.
  • the XR communication session may correspond to an XR-based telecommunication session, a game, or the like.
  • Various issues related to XR and media (XRM) services are currently under study.
  • a PDU Set may include a plurality of PDUs, and each PDU may include data for a common presentation time.
  • a PDU Set may include PDUs including data for a frame of video data and/or graphical XR data for computer-generated graphics.
  • a PDU Set may correspond to a video frame and each PDU may be a slice of the video frame or a network abstraction layer (NAL) unit.
  • an interface between an application domain and the 5GS may be based on quality of service (QoS) Flows.
  • A QoS Flow is the finest granularity of QoS differentiation in the PDU Session. Each QoS Flow is identified by a QoS Flow identifier (QFI).
  • User plane traffic with the same QFI within a PDU Session may receive the same traffic forwarding treatment.
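The QFI-based forwarding rule can be sketched as follows; the packet representation and the treatment table are illustrative assumptions, not part of the 5GS specification.

```python
# Sketch: packets within a PDU Session that carry the same QFI receive the
# same forwarding treatment. The QFI-to-policy mapping here is hypothetical.
from collections import defaultdict

TREATMENT = {1: "low-latency", 2: "best-effort"}  # hypothetical policies

def group_by_qfi(packets):
    """Bucket packet payloads by their QFI marking."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[pkt["qfi"]].append(pkt["payload"])
    return flows

packets = [
    {"qfi": 1, "payload": b"a"},
    {"qfi": 2, "payload": b"b"},
    {"qfi": 1, "payload": b"c"},
]
flows = group_by_qfi(packets)
assert flows[1] == [b"a", b"c"]   # same QFI, same treatment
assert TREATMENT[2] == "best-effort"
```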
  • Each PDU may correspond to, for example, a packet communicated via a computer-based network.
  • Real-time Transport Protocol (RTP) or other protocols may be used to transport the PDUs.
  • RTP is typically carried over the User Datagram Protocol (UDP).
  • Thus, packets may be delivered out of order, and UDP does not provide a packet delivery guarantee.
  • packet processing at the network level generally does not have access to packet payload data (which may include video coding layer (VCL) data), because the payload data may be encrypted or otherwise inaccessible to network elements.
  • certain video information may be included in packet headers outside of the payload, such that the packets can be processed by network devices that cannot access the payload data.
  • data may include, for example, identification information for a frame or portion of a frame of video data.
  • the identification may specifically identify the frame, such as with a picture order count (POC) value that indicates the display order of the frame.
  • a frame number may indicate a coding order value of the frame, which may differ from the display order.
  • the identification information may also (additionally or alternatively) include data representing a coding layer, such as a temporal layer identifier, that generally corresponds to a number of possible reference frames the frame may use for prediction and whether subsequent frames can use the frame as a reference frame.
  • a network device or network component of a user device may determine, for each received packet, identifier information for media data included in a payload of the packet. Accordingly, the network device or network component may determine, for example, whether all packets of the frame have been received, whether reference frames for the frame have been received, whether subsequent frames that use the frame for reference can be decoded (due to the frame having been received or not), or the like. In this manner, the network device or network component may determine whether to, for example, provide the video data in the payload of a packet to a video decoder, whether to retrieve missing reference frames, whether to discard one or more sets of video data without sending the video data to the video decoder, or the like.
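A minimal sketch of the per-frame bookkeeping described above, assuming each packet header exposes a frame identifier, a packet index, and a packet count; these fields and the class below are hypothetical, chosen only to illustrate the forward-or-wait decision.

```python
# Sketch: a network component tracking, per frame identifier, whether all
# packets of a frame have arrived before handing the frame onward.

class FrameAssembler:
    def __init__(self):
        self.frames = {}  # frame_id -> {"expected": int, "parts": {index: data}}

    def add_packet(self, frame_id, index, count, data):
        entry = self.frames.setdefault(frame_id, {"expected": count, "parts": {}})
        entry["parts"][index] = data
        return self.is_complete(frame_id)

    def is_complete(self, frame_id):
        entry = self.frames.get(frame_id)
        return entry is not None and len(entry["parts"]) == entry["expected"]

    def assemble(self, frame_id):
        # Reorder by packet index; only valid once the frame is complete.
        entry = self.frames.pop(frame_id)
        return b"".join(entry["parts"][i] for i in sorted(entry["parts"]))

asm = FrameAssembler()
asm.add_packet(7, index=1, count=2, data=b"tail")  # out-of-order arrival
assert not asm.is_complete(7)                      # still missing a packet
asm.add_packet(7, index=0, count=2, data=b"head")
assert asm.assemble(7) == b"headtail"
```

A real component would also track inter-frame reference dependencies (as the disclosure notes) before deciding to discard an incomplete frame; this sketch shows only the completeness check.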
  • the techniques of this disclosure may be applied to video files conforming to video data encapsulated according to any of ISO base media file format, Scalable Video Coding (SVC) file format, Advanced Video Coding (AVC) file format, Third Generation Partnership Project (3GPP) file format, and/or Multiview Video Coding (MVC) file format, or other similar video file formats.
  • FIG. 1 is a block diagram illustrating an example system 10 that implements techniques for streaming media data over a network.
  • system 10 includes content preparation device 20, server device 60, and client device 40.
  • Client device 40 and server device 60 are communicatively coupled by network 74, which may comprise the Internet.
  • content preparation device 20 and server device 60 may also be coupled by network 74 or another network, or may be directly communicatively coupled.
  • content preparation device 20 and server device 60 may comprise the same device.
  • Content preparation device 20, in the example of FIG. 1, comprises audio source 22 and video source 24.
  • Audio source 22 may comprise, for example, a microphone that produces electrical signals representative of captured audio data to be encoded by audio encoder 26.
  • audio source 22 may comprise a storage medium storing previously recorded audio data, an audio data generator such as a computerized synthesizer, or any other source of audio data.
  • Video source 24 may comprise a video camera that produces video data to be encoded by video encoder 28, a storage medium encoded with previously recorded video data, a video data generation unit such as a computer graphics source, or any other source of video data.
  • Content preparation device 20 is not necessarily communicatively coupled to server device 60 in all examples, but may store multimedia content to a separate medium that is read by server device 60.
  • Raw audio and video data may comprise analog or digital data. Analog data may be digitized before being encoded by audio encoder 26 and/or video encoder 28. Audio source 22 may obtain audio data from a speaking participant while the speaking participant is speaking, and video source 24 may simultaneously obtain video data of the speaking participant. In other examples, audio source 22 may comprise a computer-readable storage medium comprising stored audio data, and video source 24 may comprise a computer-readable storage medium comprising stored video data. In this manner, the techniques described in this disclosure may be applied to live, streaming, real-time audio and video data or to archived, pre-recorded audio and video data.
  • Audio frames that correspond to video frames are generally audio frames containing audio data that was captured (or generated) by audio source 22 contemporaneously with video data captured (or generated) by video source 24 that is contained within the video frames.
  • audio source 22 captures the audio data
  • video source 24 captures video data of the speaking participant at the same time, that is, while audio source 22 is capturing the audio data.
  • an audio frame may temporally correspond to one or more particular video frames.
  • an audio frame corresponding to a video frame generally corresponds to a situation in which audio data and video data were captured at the same time and for which an audio frame and a video frame comprise, respectively, the audio data and the video data that was captured at the same time.
  • audio encoder 26 may encode a timestamp in each encoded audio frame that represents a time at which the audio data for the encoded audio frame was recorded
  • video encoder 28 may encode a timestamp in each encoded video frame that represents a time at which the video data for an encoded video frame was recorded.
  • an audio frame corresponding to a video frame may comprise an audio frame comprising a timestamp and a video frame comprising the same timestamp.
  • Content preparation device 20 may include an internal clock from which audio encoder 26 and/or video encoder 28 may generate the timestamps, or that audio source 22 and video source 24 may use to associate audio and video data, respectively, with a timestamp.
  • audio source 22 may send data to audio encoder 26 corresponding to a time at which audio data was recorded
  • video source 24 may send data to video encoder 28 corresponding to a time at which video data was recorded
  • audio encoder 26 may encode a sequence identifier in encoded audio data to indicate a relative temporal ordering of encoded audio data but without necessarily indicating an absolute time at which the audio data was recorded
  • video encoder 28 may also use sequence identifiers to indicate a relative temporal ordering of encoded video data.
  • a sequence identifier may be mapped or otherwise correlated with a timestamp.
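The correlation between relative sequence identifiers and timestamps can be sketched as a simple linear mapping; the clock values and 0.05 s frame interval below are illustrative assumptions (corresponding to 20 frames per second).

```python
# Sketch: mapping relative sequence identifiers to capture timestamps so
# that audio and video captured at the same instant can be matched.

def sequence_to_timestamp(seq, start_time, frame_interval):
    """Map a relative sequence identifier to an absolute capture time."""
    return start_time + seq * frame_interval

# At 20 frames per second, consecutive frames are 0.05 s apart.
t0 = 1000.0
assert sequence_to_timestamp(0, t0, 0.05) == 1000.0
assert abs(sequence_to_timestamp(3, t0, 0.05) - 1000.15) < 1e-9

# An audio frame and a video frame correspond when their sequence
# identifiers map to the same capture time.
audio_t = sequence_to_timestamp(3, t0, 0.05)
video_t = sequence_to_timestamp(3, t0, 0.05)
assert audio_t == video_t
```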
  • Audio encoder 26 generally produces a stream of encoded audio data
  • video encoder 28 produces a stream of encoded video data.
  • Each individual stream of data may be referred to as an elementary stream.
  • An elementary stream is a single, digitally coded (possibly compressed) component of a media presentation.
  • the coded video or audio part of the media presentation can be an elementary stream.
  • An elementary stream may be converted into a packetized elementary stream (PES) before being encapsulated within a video file.
  • the basic unit of data of an elementary stream is a packetized elementary stream (PES) packet.
  • coded video data generally corresponds to elementary video streams.
  • audio data corresponds to one or more respective elementary streams.
  • encapsulation unit 30 of content preparation device 20 receives elementary streams comprising coded video data from video encoder 28 and elementary streams comprising coded audio data from audio encoder 26.
  • video encoder 28 and audio encoder 26 may each include packetizers for forming PES packets from encoded data.
  • video encoder 28 and audio encoder 26 may each interface with respective packetizers for forming PES packets from encoded data.
  • encapsulation unit 30 may include packetizers for forming PES packets from encoded audio and video data.
  • Video encoder 28 may encode video data of multimedia content in a variety of ways, to produce different representations of the multimedia content at various bitrates and with various characteristics, such as pixel resolutions, frame rates, conformance to various coding standards, conformance to various profiles and/or levels of profiles for various coding standards, representations having one or multiple views (e.g., for two-dimensional or three-dimensional playback), or other such characteristics.
  • a representation as used in this disclosure, may comprise one of audio data, video data, text data (e.g., for closed captions), or other such data.
  • the representation may include an elementary stream, such as an audio elementary stream or a video elementary stream.
  • Each PES packet may include a stream id that identifies the elementary stream to which the PES packet belongs.
  • Encapsulation unit 30 is responsible for assembling elementary streams into streamable media data.
  • Encapsulation unit 30 receives PES packets for elementary streams of a media presentation from audio encoder 26 and video encoder 28 and forms corresponding network abstraction layer (NAL) units from the PES packets.
  • Coded video segments may be organized into NAL units, which provide a “network-friendly” video representation addressing applications such as video telephony, storage, broadcast, or streaming.
  • NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units.
  • VCL units may contain the core compression engine and may include block, macroblock, and/or slice level data.
  • Other NAL units may be non-VCL NAL units.
  • A coded picture in one time instance, normally presented as a primary coded picture, may be contained in an access unit, which may include one or more NAL units.
  • Non-VCL NAL units may include parameter set NAL units and Supplemental Enhancement Information (SEI) NAL units, among others.
  • Parameter sets may contain sequence-level header information (in sequence parameter sets (SPS)) and the infrequently changing picture-level header information (in picture parameter sets (PPS)).
  • With parameter sets, infrequently changing information need not be repeated for each sequence or picture; hence, coding efficiency may be improved.
  • the use of parameter sets may enable out-of-band transmission of the important header information, avoiding the need for redundant transmissions for error resilience.
  • parameter set NAL units may be transmitted on a different channel than other NAL units, such as SEI NAL units.
  • SEI messages may contain information that is not necessary for decoding the coded picture samples from VCL NAL units, but may assist in processes related to decoding, display, error resilience, and other purposes.
  • SEI messages may be contained in non-VCL NAL units. SEI messages are a normative part of some standard specifications, but are not always mandatory for a standard-compliant decoder implementation.
  • SEI messages may be sequence level SEI messages or picture level SEI messages. Some sequence level information may be contained in SEI messages, such as scalability information SEI messages in the example of SVC and view scalability information SEI messages in MVC. These example SEI messages may convey information on, e.g., extraction of operation points and characteristics of the operation points.
  • Server device 60 includes Real-time Transport Protocol (RTP) transmitting unit 70 and network interface 72.
  • server device 60 may include a plurality of network interfaces.
  • any or all of the features of server device 60 may be implemented on other devices of a content delivery network, such as routers, bridges, proxy devices, switches, or other devices.
  • intermediate devices of a content delivery network may cache data of multimedia content 64 and include components that conform substantially to those of server device 60.
  • network interface 72 is configured to send and receive data via network 74.
  • RTP transmitting unit 70 is configured to deliver media data to client device 40 via network 74 according to RTP, which is standardized in Request for Comments (RFC) 3550 by the Internet Engineering Task Force (IETF).
  • RTP transmitting unit 70 may also implement protocols related to RTP, such as RTP Control Protocol (RTCP), Real-Time Streaming Protocol (RTSP), Session Initiation Protocol (SIP), and/or Session Description Protocol (SDP).
  • RTP transmitting unit 70 may send media data via network interface 72, which may implement User Datagram Protocol (UDP) and/or Internet Protocol (IP).
  • server device 60 may send media data via RTP and RTSP over UDP using network 74.
  • RTP transmitting unit 70 may receive an RTSP describe request from, e.g., client device 40.
  • the RTSP describe request may include data indicating what types of data are supported by client device 40.
  • RTP transmitting unit 70 may respond to client device 40 with data indicating media streams, such as media content 64, that can be sent to client device 40, along with a corresponding network location identifier, such as a uniform resource locator (URL) or uniform resource name (URN).
  • RTP transmitting unit 70 may then receive an RTSP setup request from client device 40.
  • the RTSP setup request may generally indicate how a media stream is to be transported.
  • the RTSP setup request may contain the network location identifier for the requested media data (e.g., media content 64) and a transport specifier, such as local ports for receiving RTP data and control data (e.g., RTCP data) on client device 40.
  • RTP transmitting unit 70 may reply to the RTSP setup request with a confirmation and data representing ports of server device 60 by which the RTP data and control data will be sent.
  • RTP transmitting unit 70 may then receive an RTSP play request, to cause the media stream to be “played,” i.e., sent to client device 40 via network 74.
  • RTP transmitting unit 70 may also receive an RTSP teardown request to end the streaming session, in response to which, RTP transmitting unit 70 may stop sending media data to client device 40 for the corresponding session.
  • RTP receiving unit 52 may initiate a media stream by initially sending an RTSP describe request to server device 60.
  • the RTSP describe request may indicate types of data supported by client device 40.
  • RTP receiving unit 52 may then receive a reply from server device 60 specifying available media streams, such as media content 64, that can be sent to client device 40, along with a corresponding network location identifier, such as a uniform resource locator (URL) or uniform resource name (URN).
  • RTP receiving unit 52 may then generate an RTSP setup request and send the RTSP setup request to server device 60.
  • the RTSP setup request may contain the network location identifier for the requested media data (e.g., media content 64) and a transport specifier, such as local ports for receiving RTP data and control data (e.g., RTCP data) on client device 40.
  • RTP receiving unit 52 may receive a confirmation from server device 60, including ports of server device 60 that server device 60 will use to send media data and control data.
  • RTP transmitting unit 70 of server device 60 may send media data (e.g., packets of media data) to client device 40 according to the media streaming session.
  • Server device 60 and client device 40 may exchange control data (e.g., RTCP data) indicating, for example, reception statistics by client device 40, such that server device 60 can perform congestion control or otherwise diagnose and address transmission faults.
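The DESCRIBE/SETUP/PLAY/TEARDOWN exchange described above can be sketched by composing RTSP/1.0 request messages. The URL, client ports, and session identifier below are placeholders, and only the request side of the exchange is shown.

```python
# Sketch: building the RTSP request sequence a client such as RTP
# receiving unit 52 might send (DESCRIBE -> SETUP -> PLAY -> TEARDOWN).

def rtsp_request(method, url, cseq, extra=None):
    """Compose a minimal RTSP/1.0 request message."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

url = "rtsp://example.com/media"  # placeholder network location identifier
session = [
    rtsp_request("DESCRIBE", url, 1, {"Accept": "application/sdp"}),
    rtsp_request("SETUP", url + "/track1", 2,
                 {"Transport": "RTP/AVP;unicast;client_port=8000-8001"}),
    rtsp_request("PLAY", url, 3, {"Session": "12345678"}),
    rtsp_request("TEARDOWN", url, 4, {"Session": "12345678"}),
]
assert session[0].startswith("DESCRIBE rtsp://example.com/media RTSP/1.0")
assert "client_port=8000-8001" in session[1]   # transport specifier in SETUP
```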
  • Network interface 54 may receive and provide media of a selected media presentation to RTP receiving unit 52, which may in turn provide the media data to decapsulation unit 50.
  • Decapsulation unit 50 may decapsulate elements of a video file into constituent PES streams, depacketize the PES streams to retrieve encoded data, and send the encoded data to either audio decoder 46 or video decoder 48, depending on whether the encoded data is part of an audio or video stream, e.g., as indicated by PES packet headers of the stream.
  • Audio decoder 46 decodes encoded audio data and sends the decoded audio data to audio output 42
  • video decoder 48 decodes encoded video data and sends the decoded video data, which may include a plurality of views of a stream, to video output 44.
  • Video encoder 28, video decoder 48, audio encoder 26, audio decoder 46, encapsulation unit 30, RTP receiving unit 52, and decapsulation unit 50 each may be implemented as any of a variety of suitable processing circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware or any combinations thereof.
  • Each of video encoder 28 and video decoder 48 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC).
  • each of audio encoder 26 and audio decoder 46 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined CODEC.
  • An apparatus including video encoder 28, video decoder 48, audio encoder 26, audio decoder 46, encapsulation unit 30, RTP receiving unit 52, and/or decapsulation unit 50 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.
  • Client device 40, server device 60, and/or content preparation device 20 may be configured to operate in accordance with the techniques of this disclosure. For purposes of example, this disclosure describes these techniques with respect to client device 40 and server device 60. However, it should be understood that content preparation device 20 may be configured to perform these techniques, instead of (or in addition to) server device 60.
  • Encapsulation unit 30 may form NAL units comprising a header that identifies a program to which the NAL unit belongs, as well as a payload, e.g., audio data, video data, or data that describes the transport or program stream to which the NAL unit corresponds.
  • a NAL unit includes a 1-byte header and a payload of varying size.
  • a NAL unit including video data in its payload may comprise various granularity levels of video data.
  • a NAL unit may comprise a block of video data, a plurality of blocks, a slice of video data, or an entire picture of video data.
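For AVC, the 1-byte NAL unit header mentioned above packs a forbidden_zero_bit (1 bit), nal_ref_idc (2 bits), and nal_unit_type (5 bits), so parsing it is a simple bit operation. The type-name table below lists only a small subset of the defined types.

```python
# Sketch: parsing the 1-byte H.264/AVC NAL unit header.
# Layout: forbidden_zero_bit (1) | nal_ref_idc (2) | nal_unit_type (5).

NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

def parse_nal_header(first_byte):
    forbidden = (first_byte >> 7) & 0x1
    nal_ref_idc = (first_byte >> 5) & 0x3
    nal_unit_type = first_byte & 0x1F
    return forbidden, nal_ref_idc, nal_unit_type

# 0x67 = 0b0110_0111: nal_ref_idc 3, nal_unit_type 7, i.e., a sequence
# parameter set.
forbidden, ref_idc, ntype = parse_nal_header(0x67)
assert (forbidden, ref_idc, ntype) == (0, 3, 7)
assert NAL_TYPES[ntype] == "SPS"
```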
  • Encapsulation unit 30 may receive encoded video data from video encoder 28 in the form of PES packets of elementary streams. Encapsulation unit 30 may associate each elementary stream with a corresponding program. Encapsulation unit 30 may also assemble access units from a plurality of NAL units.
  • an access unit may comprise one or more NAL units for representing a frame of video data, as well as audio data corresponding to the frame when such audio data is available.
  • An access unit generally includes all NAL units for one output time instance, e.g., all audio and video data for one time instance. For example, if each view has a frame rate of 20 frames per second (fps), then each time instance may correspond to a time interval of 0.05 seconds. During this time interval, the specific frames for all views of the same access unit (the same time instance) may be rendered simultaneously.
  • an access unit may comprise a coded picture in one time instance, which may be presented as a primary coded picture.
  • an access unit may comprise all audio and video frames of a common temporal instance, e.g., all views corresponding to time X.
  • This disclosure also refers to an encoded picture of a particular view as a “view component.” That is, a view component may comprise an encoded picture (or frame) for a particular view at a particular time.
  • an access unit may be defined as comprising all view components of a common temporal instance.
  • the decoding order of access units need not necessarily be the same as the output or display order.
  • After encapsulation unit 30 has assembled NAL units and/or access units into a video file based on received data, encapsulation unit 30 passes the video file to output interface 32 for output.
  • encapsulation unit 30 may store the video file locally or send the video file to a remote server via output interface 32, rather than sending the video file directly to client device 40.
  • Output interface 32 may comprise, for example, a transmitter, a transceiver, a device for writing data to a computer-readable medium such as, for example, an optical drive, a magnetic media drive (e.g., floppy drive), a universal serial bus (USB) port, a network interface, or other output interface.
  • Output interface 32 outputs the video file to a computer-readable medium, such as, for example, a transmission signal, a magnetic medium, an optical medium, a memory, a flash drive, or other computer-readable medium.
  • Network interface 54 may receive a NAL unit or access unit via network 74 and provide the NAL unit or access unit to decapsulation unit 50, via RTP receiving unit 52.
  • Decapsulation unit 50 may decapsulate elements of a video file into constituent PES streams, depacketize the PES streams to retrieve encoded data, and send the encoded data to either audio decoder 46 or video decoder 48, depending on whether the encoded data is part of an audio or video stream, e.g., as indicated by PES packet headers of the stream.
  • Audio decoder 46 decodes encoded audio data and sends the decoded audio data to audio output 42
  • video decoder 48 decodes encoded video data and sends the decoded video data, which may include a plurality of views of a stream, to video output 44.
  • FIG. 2 is a conceptual diagram illustrating an example architecture for extended reality (XR) traffic delivery.
  • FIG. 2 depicts application / service layer 100, user plane function (UPF) 102, access network (AN) 110, and client device 120.
  • Client device 120 may correspond to client device 40 of FIG. 1 and generally include similar components to those of client device 40.
  • UPF 102 and AN 110 may correspond to network devices within network 74 of FIG. 1.
  • UPF 102 includes packet detection unit 104 and packet detection rule 106
  • AN 110 includes quality of service (QoS) to AN resource mapping unit 112 and radio interface 114
  • client device 120 includes QoS rules 122, QoS to AN resource mapping unit 124, and radio interface 126.
  • Client device 120 may both send and receive media data, e.g., in the form of audio, video, and/or extended reality (XR) data.
  • Data sent from client device 120 to another device, such as another client device or server device 60 of FIG. 1, is sent via an uplink stream, while data received by client device 120 is received via a downlink stream.
  • UPF 102 may classify incoming data packets based on packet filter sets of the packet detection rule 106.
  • UPF 102 may convey the classification of the user plane traffic belonging to a QoS Flow through a QoS flow identifier (QFI) marking.
  • QoS to AN resource mapping unit 112 of AN 110 binds QoS flows to AN resources (i.e., Data Radio Bearers).
  • For the uplink (UL) XRM service stream, client device 120 (which may be user equipment (UE)) performs a PDU Set identification procedure similar to that of UPF 102. Client device 120 may perform the marking for the identified PDU Set to the AN.
  • 3GPP TR 23.700-60 reports candidate solutions to identify the PDU and PDU Set boundaries by matching RTP/SRTP header, header extension and payload.
  • New parameters such as PDU Set sequence number, PDU Set identifier, and PDU Set type were proposed to identify the PDU and PDU Set.
  • the candidate solutions also proposed to mark the PDU Set priority, dependency or importance by matching the relevant parameters in RTP header extension or payload such as video Network Abstraction Layer (NAL) type, temporal ID (TID), and layer ID (LID).
  • the most important PDU or PDU Set may be assigned to the QoS Flow with higher QoS requirements, and the less important PDU or PDU Set may be discarded during network congestion or when the PDU or PDU Set it depends on is not completely delivered.
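  The mapping just described can be sketched as follows. This is an illustrative sketch only: the flow names, the importance threshold, and the function signatures are assumptions for illustration, not part of any 3GPP specification.

```python
# Illustrative sketch: map a PDU Set importance value (1 = most important,
# 7 = least important) to a QoS flow, and decide whether a PDU Set may be
# dropped. Thresholds and the two-flow split are assumptions.
def assign_qos_flow(importance: int, high_priority_threshold: int = 3) -> str:
    """More important PDU Sets go to the flow with higher QoS requirements."""
    return "high-qos-flow" if importance <= high_priority_threshold else "best-effort-flow"

def may_discard(importance: int, congested: bool, dependency_delivered: bool) -> bool:
    """A less important PDU Set may be dropped during congestion, or when
    the PDU Set it depends on was not completely delivered."""
    if not dependency_delivered:
        return True
    return congested and importance > 3
```

  A network function could apply `may_discard` per PDU Set as congestion state changes, without re-inspecting the media payload.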
  • a PDU may be mapped to a video data packet.
  • An encoder or application function may be configured to recognize video data packet properties, such as packet type and dependency, and the identifier of the data packet, such as a picture order counter (POC) value, for a video packet.
  • the identifier is usually embedded within the media data packet and not exposed in the RTP header or header extension. The link between a particular video data packet and the corresponding properties may be lost during data encapsulation, filtering, and mapping to a QoS flow, especially for unordered transmission, as the packet order may be shuffled.
  • Adding fields of attributes or properties of each packet to the RTP header or header extension may increase the overhead cost, because many RTP packets may share the same properties. It may be beneficial to add a video data packet identifier to the RTP header or header extension as an index into the list of data packet properties.
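  The indexing idea can be sketched as follows. The table contents and field names ("frame_type", "priority", "depends_on") are hypothetical examples, not fields defined by this disclosure.

```python
# Sketch: instead of repeating per-packet attributes in every RTP header
# extension, carry only a frame identifier and look up the shared properties
# in a side table provided once by the application function.
frame_properties = {
    17: {"frame_type": "I", "priority": 1, "depends_on": []},
    18: {"frame_type": "P", "priority": 3, "depends_on": [17]},
    19: {"frame_type": "B", "priority": 7, "depends_on": [17, 18]},
}

def classify_packet(frame_id: int) -> dict:
    # The network function inspects only the identifier carried with the
    # packet and links it to the previously signaled properties.
    return frame_properties[frame_id]
```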
  • PDU Sets may also have marked priority values.
  • picture type, temporal identifier (TID), or layer identifier (LID) may be used to identify a priority among PDU Sets and mark the PDU Set priority or importance.
  • multiple priority implications may add complexity to intermediate devices to derive a final, overall priority.
  • the encoder may assign the same TID to all frames, and the importance of frames with the same TID may also vary. It generally makes sense to mark an intra-coded picture as the highest priority and pictures not used for reference as the lowest priority. However, there are also cases in which a picture is intra-coded but not used for reference (e.g., all-intra mode). A single indication may be necessary to indicate the picture priority at the codec level for inspection.
  • PDU Sets may also have marked dependency information.
  • Picture dependency is managed by reference picture management in most video coding schemes.
  • a reference picture is stored in a decoded picture buffer (DPB) for inter-prediction until it is marked as “unused for reference” and removed from the DPB.
  • ITU-T H.266/Versatile Video Coding specifies two reference picture lists (RPLs), called list0 and list1.
  • the pre-defined candidate RPLs are signaled in the sequence parameter set (SPS).
  • the index referencing a predefined candidate RPL is signaled either in the picture header (PH), if all slices of the picture have the same RPLs, or in Slice header (SH).
  • a new RPL can also be directly signaled in the picture header (PH) and slice header (SH).
  • a reference picture may be a short-term reference picture, a long-term reference picture, or an inter-layer reference picture, and the reference picture is marked using picture order count (POC) and the layer ID if it is used for inter-layer prediction.
  • the reference pictures used for inter-prediction of the current picture are the active reference pictures of the current picture.
  • ITU-T H.265/High Efficiency Video Coding specifies the reference pictures in the reference picture set (RPS).
  • RPS may be signaled in the sequence parameter set (SPS) or slice header (SH).
  • ITU-T H.264/Advanced Video Coding specifies the reference pictures based on two marking mechanisms: the implicit sliding window process and the explicit memory management control operation process. Usually up to 16 unique reference pictures are allowed, given the decoded picture buffer size.
  • AOMedia Video 1 allows a maximum of 7 reference frames, and the frame marking function is specified in the frame header open bitstream unit (OBU).
  • the reference picture indication may be presented in a frame header OBU or the additional header of each tile.
  • reference picture marking is based on the POC value, which represents the picture output order, while the frames or PDU Sets in the bitstream are in encoding order, such that the POC values of adjacent frames in the bitstream may not be continuous. It is not straightforward to map the POC value to the PDU Set sequence number (SN) with existing schemes.
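  A small worked example makes the mismatch concrete. The GOP below is hypothetical: it lists the POC (output order) of each frame in decoding (bitstream) order for a structure with B-frames.

```python
# Hypothetical GOP illustrating why POC (output order) is not continuous in
# bitstream (decoding) order: B-frames are decoded after the pictures they
# reference but displayed between them.
decoding_order_pocs = [0, 4, 2, 1, 3, 8, 6, 5, 7]

# Adjacent frames in the bitstream can differ in POC by more than one and
# in either direction, so POC cannot serve directly as a monotonically
# increasing PDU Set sequence number.
deltas = [b - a for a, b in zip(decoding_order_pocs, decoding_order_pocs[1:])]
```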
  • H.266 supports sub-picture partitioning, in which each sub-picture may be decoded independently from other sub-pictures within the same frame.
  • a frame may contain mixed intra-coded subpictures and inter-coded subpictures, even though all subpictures may share the same reference pictures. Additional attributes may be needed for subpicture identification and marking to facilitate appropriate PDU handling. For example, when subpictures are independently coded and some PDUs of a PDU Set are lost, the remaining PDUs may continue to be delivered. When subpictures are not independently coded and some PDUs are lost, the remaining PDUs may be dropped.
  • AV1 supports a tile list, which contains tile data associated with a frame, and each tile can be independently decoded.
  • the tile list allows the decoder to process a subset of tiles and display corresponding parts of the frame, without the need to fully decode all the tiles in the frame.
  • FIG. 3 is a conceptual diagram illustrating a series of gradual decoder refresh (GDR) frames of video data.
  • the stream is accessed starting at a stream access point.
  • Stream access points can be fully intra-prediction encoded, such that the entire frame can be decoded and playback can begin with the stream access point.
  • Such frames may be referred to as instantaneous decoder refresh (IDR) pictures.
  • fully intra-prediction encoded frames have relatively high bitrates.
  • GDR stream access points include a series of frames including portions that are intra-prediction encoded, while other portions are inter-prediction encoded.
  • the inter-prediction encoded portions may be decodable or non-decodable, based on which side of the intra-prediction portion the inter-prediction portions occur.
  • GDR enables encoders to smooth the bit rate of a bitstream by distributing intra-coded slices or blocks across multiple pictures, as contrasted with intra coding entire pictures, thus allowing significant end-to-end delay reduction, which is especially important for ultra-low-delay applications.
  • although some areas of the picture cannot be correctly decoded at first, after decoding a number of additional pictures, referred to as the recovery period, the entire picture for the recovery point and all subsequent pictures in output order would be correctly decoded.
  • FIG. 3 shows an example of such a GDR picture recovery period, where clean areas and intra-predicted areas are areas that can be correctly decoded, and dirty areas are areas that cannot be correctly decoded for random access.
  • the PDUs of the dirty areas that cannot be correctly decoded may be discarded when random access occurs at the associated GDR picture.
  • the identification and marking of PDUs and PDU Sets of the GDR picture has not been addressed in the candidate solutions.
  • GDR frames 140A-140D are shown.
  • GDR frame 140A includes intra-prediction encoded area 142A and non-decodable inter-prediction encoded area 144A.
  • GDR frame 140B includes decodable inter-prediction area 146B, intra-prediction encoded area 142B, and non-decodable inter-prediction encoded area 144B.
  • GDR frame 140C includes decodable inter-prediction area 146C, intra-prediction encoded area 142C, and non-decodable inter-prediction encoded area 144C.
  • GDR frame 140D includes decodable inter-prediction area 146D and intra-prediction encoded area 142D.
  • Non-decodable inter-prediction encoded areas 144A-144C are not decodable, when performing random access starting from GDR frame 140A, because non-decodable inter-prediction encoded areas 144A-144C may refer to reference frames preceding GDR frame 140A in coding order. Decodable inter-prediction areas 146B-146D are decodable because they can only be predicted from reference frames starting at GDR frame 140A, and only from either decodable inter-prediction areas or intra-prediction areas.
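  The clean/intra/dirty partitioning of FIG. 3 can be modeled in a few lines. The four-frame period and the left-to-right intra-column sweep are simplifying assumptions chosen to match the figure; real encoders may refresh in other patterns.

```python
# Simplified model of FIG. 3: a four-frame GDR period in which an
# intra-coded column sweeps left to right. Returns the per-column status
# of a frame when random access starts at the first GDR frame (index 0).
def gdr_column_status(frame_index: int, num_columns: int = 4) -> list:
    status = []
    for col in range(num_columns):
        if col < frame_index:
            status.append("clean")   # decodable inter-predicted area
        elif col == frame_index:
            status.append("intra")   # intra-coded refresh area
        else:
            status.append("dirty")   # not correctly decodable on random access
    return status
```

  PDUs covering "dirty" columns are exactly those that may be discarded when random access occurs at the GDR picture.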
  • a picture is not output when the PictureOutputFlag is equal to 0.
  • a picture is not output when it is a random access skippable leading (RASL) picture and NoOutputBeforeRecoveryFlag of the associated Intra Random Access Point (IRAP) picture is equal to 1;
  • a picture is not output when it is a Gradual Decoding Refresh (GDR) picture with NoOutputBeforeRecoveryFlag equal to 1 or is a recovering picture of a GDR picture with NoOutputBeforeRecoveryFlag equal to 1;
  • a picture is not output when its ph_pic_output_flag is equal to 0.
  • the value of NoOutputBeforeRecoveryFlag may be set by external means or by the associated picture position in the bitstream.
  • a PDU Set may be discarded if it is not output and not used as reference for the following picture prediction.
  • FIG. 4 is a conceptual diagram illustrating example sets of identifying information for protocol data units (PDUs).
  • This disclosure describes techniques in which high level syntax designs in video codecs may be used to facilitate PDU and PDU Set identification and marking.
  • the identifier of a video frame (e.g., POC value in AVC/HEVC/VVC, display_frame_id or current_frame_id in AV1)
  • An application function may indicate the video frame properties such as frame type, priority, and dependency to UPF 102 (FIG. 2) via a Policy Control Function (PCF) and a Session Management Function (SMF) using the video frame identifier.
  • UPF 102 may classify video packets for QoS marking by inspecting the frame identifier of the data packet and linking it to the corresponding properties for QoS marking.
  • FIG. 4 depicts an example of using a video frame ID as an index to link a PDU Set to a table or list containing the associated video data properties.
  • the slice or tile identifier (e.g., slice segment address in HEVC, slice address in VVC, tile count in AV1) indicating a specific slice/tile within a frame may be signaled in the RTP header extension, together with the frame identifier.
  • the slice/tile identifier may be mapped to a PDU attribute field.
  • the AF may indicate the video slice/tile properties using both frame identifier and slice/tile identifier to UPF 102.
  • UPF 102 may classify video packets for QoS marking by inspecting the frame identifier and slice/tile identifier of the data packet and linking these values to the corresponding properties for QoS marking.
  • the frame marking RTP header extension may include priority marking information.
  • the frame marking RTP header extension may include data indicating whether a corresponding frame is intra-prediction encoded (an I-frame), a discardable frame (D), a temporal ID (TID) for the frame, and/or a layer ID (LID). Any or all of this data may represent the PDU Set priority marking.
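  A parser for such a frame marking extension might look as follows. This is a hedged sketch: the bit layout (S, E, I, D, B flags, a 3-bit TID, then an LID octet) follows the form commonly described for scalable streams in the IETF frame-marking draft, but exact layouts differ between draft versions, so treat the positions here as assumptions.

```python
# Hedged sketch of parsing a frame-marking RTP header extension payload:
# one octet carrying S|E|I|D|B flags and a 3-bit temporal ID, optionally
# followed by a layer ID octet. Illustrative layout only.
def parse_frame_marking(data: bytes) -> dict:
    first = data[0]
    return {
        "start_of_frame": bool(first & 0x80),    # S: first packet of frame
        "end_of_frame": bool(first & 0x40),      # E: last packet of frame
        "independent": bool(first & 0x20),       # I: intra-coded (I-frame)
        "discardable": bool(first & 0x10),       # D: not used as reference
        "base_layer_sync": bool(first & 0x08),   # B
        "tid": first & 0x07,                     # 3-bit temporal ID
        "lid": data[1] if len(data) > 1 else 0,  # layer ID octet, if present
    }
```

  The I, D, TID, and LID fields extracted here are the values that may jointly represent the PDU Set priority marking.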
  • priority attribute data may be signaled in a network abstraction layer (NAL) unit header, a specific NAL unit, or a supplemental enhancement information (SEI) message for the PDU Set priority marking inspection.
  • a NAL unit priority indication may be signaled in the NAL unit header or header extension.
  • the priority indication may be a 3-bit code that specifies the priority of the associated NAL unit; it takes values between 1 and 7, with 1 representing the highest priority and 7 the lowest priority.
  • Table 1 below is an example representing an indication of the NAL unit priority in the NAL unit header extension.
  • In Table 1, “[added: …]” represents additions relative to existing NAL unit syntax, e.g., of ITU-T H.266, while “[removed: …]” represents deletions relative to existing NAL unit syntax.
  • Semantics for the added syntax elements in the example of Table 1 may be as follows:
  • nuh_extension_flag equal to 0 specifies that no nuh_priority and nuh_reserved_zero_5bits syntax elements are present in the NAL unit header syntax structure.
  • nuh_extension_flag equal to 1 specifies that the nuh_priority and nuh_reserved_zero_5bits syntax elements might be present in the NAL unit header syntax structure.
  • nuh_priority specifies the NAL unit priority. nuh_priority equal to 1 represents the highest priority and 7 represents the lowest priority.
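  As a sketch, extracting this 3-bit priority from a hypothetical extension octet could be done as follows. The bit position (top three bits of the extension byte) is an assumption for illustration; Table 1 is a proposal, not published syntax.

```python
# Illustrative extraction of the proposed 3-bit nuh_priority field from an
# extension byte, assuming the priority occupies the three most significant
# bits. Values 1 (highest) through 7 (lowest) are valid per the semantics.
def parse_nuh_priority(extension_byte: int) -> int:
    priority = (extension_byte >> 5) & 0x07
    if not 1 <= priority <= 7:
        raise ValueError("nuh_priority must be between 1 (highest) and 7 (lowest)")
    return priority
```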
  • a picture priority indication, aud_priority, may be signaled in the AU delimiter (AUD), as shown in Table 2:
  • Semantics for the added syntax element in the example of Table 2 may be as follows:
  • aud_priority specifies the priority of the access unit (AU) containing the AU delimiter. aud_priority equal to 1 represents the highest priority and 7 represents the lowest priority.
  • the picture priority indication may be signaled in the picture header (PH), as shown in Table 3:
  • Semantics for the added syntax element in the example of Table 3 may be as follows:
  • ph_priority specifies the priority of the current picture. ph_priority equal to 1 represents the highest priority and 7 represents the lowest priority. The lowest priority implies the current picture is never used as a reference picture.
  • the picture priority indication may specify the priority among the pictures sharing the same TID and LID of current picture.
  • the PDU Set priority marking may be derived from ph_priority, TID, and LID.
  • the slice priority indication may be signaled in the slice header (SH) to indicate the priority among slices within the same picture, or slices of the pictures sharing the same TID and LID.
  • the picture or slice priority may be signaled in an SEI message to indicate the picture priority, or slice priority using the slice address.
  • the frame priority indication may be signaled in a specific OBU for the AV1 codec, such as a frame header OBU or a metadata OBU.
  • the Tile priority indication may be signaled in Tile group OBU or Tile list OBU.
  • Table 4 is an example of a frame header OBU syntax with frame priority indication:
  • the frame priority syntax element may be signaled or mapped to other protocols such as RTP header extension or GPRS Tunneling Protocol User Plane (GTP-U) extension header.
  • the distance (e.g., POC distance) between the current picture and the active reference picture in the bitstream may be marked in a specific NAL unit or an SEI message. All active reference pictures precede the current picture in the bitstream. Depending on the PDU Set boundary, the distance may be in the unit of access unit (AU) or picture unit (PU).
  • Table 5 below is an example syntax structure that indicates the active reference picture list in the bitstream relevant to the current picture.
  • the list includes short-term active reference pictures, long-term active reference pictures, and inter-layer active reference pictures.
  • Semantics for the syntax elements in the example of Table 5 may be as follows: num_st_act_ref_pos specifies the number of short term active reference picture positions in the syntax structure.
  • st_act_ref_delta_pos specifies the distance between current picture and the active short term reference picture in units of access unit (AU).
  • For a single-layer bitstream, an AU contains one picture. Assuming the sequence number (SN) of the current PDU Set is N, the SN of the PDU Set containing the i-th short term active reference picture is ( N - st_act_ref_delta_pos[ i ] ).
  • st_act_ref_delta_pos may specify the distance between the current PU and the active short term reference picture in units of pictures.
  • num_lt_act_ref_pos specifies the number of long term active reference picture positions in the syntax structure.
  • lt_act_ref_delta_pos specifies the distance between the current picture and the active long term reference picture in units of AU.
  • For a single-layer bitstream, an AU contains a picture. Assuming the SN of the current picture or PDU Set is N, the SN of the PDU Set containing the i-th long term active reference picture is ( N - lt_act_ref_delta_pos[ i ] ).
  • lt_act_ref_delta_pos may specify the distance between the current picture and active long term reference picture in units of picture.
  • num_il_act_ref_pos specifies the number of inter-layer active reference picture positions in the syntax structure.
  • il_act_ref_delta_pos specifies the layer difference between the current picture and the active inter-layer reference picture.
  • the SN of the PDU Set containing the i-th inter-layer active reference picture is ( N - il_act_ref_delta_pos[ i ] ).
  • the data length of st_act_ref_delta_pos, lt_act_ref_delta_pos, and il_act_ref_delta_pos may be extended to accommodate the number of layers when each PDU Set contains a picture instead of an AU.
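  The sequence-number derivation given above reduces to a subtraction per delta position, as this worked example shows (the numbers are illustrative):

```python
# Worked example of the Table 5 derivation: the PDU Set carrying the i-th
# active reference picture has sequence number N - delta_pos[i], where N is
# the current PDU Set's sequence number and every active reference picture
# precedes the current picture in the bitstream.
def reference_pdu_set_sns(current_sn: int, act_ref_delta_pos: list) -> list:
    return [current_sn - delta for delta in act_ref_delta_pos]
```

  A network node that tracks which PDU Set SNs were completely delivered can then check the returned SNs to decide whether the current PDU Set's dependencies are satisfied.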
  • Table 6 below depicts an example of a simplified active reference picture list structure:
  • Semantics for the syntax elements of the example of Table 6 may be as follows: num_act_ref_pos specifies the number of active reference picture positions in the syntax structure.
  • act_ref_delta_pos specifies the distance between current picture and the active reference picture in units of access unit (AU). Assuming the SN of PDU Set is N, the SN of the PDU Set containing the i-th active reference picture is ( N - act_ref_delta_pos[ i ] ).
  • num_il_act_ref_pos specifies the number of inter-layer active reference picture positions in the syntax structure.
  • il_act_ref_delta_pos specifies the distance between the current picture and the active reference picture in units of picture unit (PU). Assuming the SN of the PDU Set is N, the SN of the PDU Set containing the i-th inter-layer active reference picture is ( N - il_act_ref_delta_pos[ i ] ).
  • the proposed active reference picture position syntax structure may be carried in the AU delimiter (AUD) or an SEI message.
  • the reference picture distance in the above syntax structure may be replaced by a sequence number indicating the AU or PU position in the bitstream. Such a sequence number increases by one for every new AU or PU added to the bitstream.
  • the sequence number may map to PDU Set sequence number to facilitate the derivation of the frame dependency.
  • the position of the reference pictures in decoding order may be signaled in a specific metadata type OBU (e.g., metadata_itut_t35) or a new metadata type OBU in AV1 to indicate the dependency of the current picture in the bitstream.
  • the proposed active reference picture list syntax elements may be signaled or mapped to other protocols such as RTP or GTP-U extended header.
  • the active reference picture list structure may be carried in an active reference picture position list SEI message.
  • a persistence flag is proposed in the SEI message to indicate whether the active reference picture list is applicable to the current picture only, or to the current picture and all subsequent pictures of the current layer. The persistence may be cancelled when a proposed cancel flag is set or a new coded layer video sequence (CLVS) of the current layer begins.
  • An independent PDU marking may be signaled to indicate whether a current PDU can be decoded without other PDUs of the same PDU Set.
  • An independent PDU may be transported even though other PDUs in the same PDU Set are lost.
  • An independent PDU can be a slice including an independent sub-picture, or a slice consisting of a motion constrained tile set (MCTS) in an H.265 bitstream.
  • a syntax element may be signaled in a specific video codec NAL unit to indicate that all slice boundaries in the CLVS (when signaled in the SPS) or the associated picture (when signaled in the PPS, PH, or AUD) are treated as picture boundaries and there is no loop filtering across the slice boundaries.
  • Each slice can be independently decoded without involving other slice samples in the same picture.
  • a syntax element may be signaled in a slice header (SH) to indicate whether a current slice can be independently decoded without prediction from other samples within the same frame.
  • the independent PDU marking may be set for each PDU in case a PDU is an AV1 tile and a tile list OBU is present.
  • a PDU Set of a random access skippable leading (RASL) picture may be marked as a discard picture in the frame marking or PDU Set marking when NoRaslOutputFlag of an associated intra random access point (HEAP) picture is equal to 1.
  • a PDU Set of a RASL picture with pps_mixed_nalu_types_in_pic_flag equal to 0 may be marked as a discard picture in the frame marking or PDU Set marking when NoOutputBeforeRecoveryFlag of the associated IRAP picture is equal to 1.
  • a syntax element may be signaled to indicate whether the associated picture may be discarded for transport or decoding in a specific NAL unit such as AUD, PH or SH, or an SEI message.
  • a RASL picture with pps_mixed_nalu_types_in_pic_flag equal to 1 may contain both one or more RASL subpictures and one or more random access decodable leading (RADL) subpictures.
  • the RADL subpicture may be used as an active reference subpicture, which should not be discarded. While the RASL subpicture or slices of the RASL picture can be discarded, the associated PDU may be marked as a discardable PDU.
  • the picture header syntax element indicates the current picture is never used as a reference picture.
  • Such a syntax element can be used for discardable marking.
  • a discardable or non-reference indication may be signaled in a Frame header OBU uncompressed header or a tile group OBU to indicate whether the associated frame is not used as reference picture for the following pictures and may be discarded without impacting the decoding process.
  • Table 7 below is an example of such a frame header OBU uncompressed header syntax with a discardable frame indication, indicated by the “[added: …]” tag.
  • an indicator may be signaled in an SEI message or a re-writable field of a specific NAL unit, or a metadata type OBU, to indicate whether the associated NAL unit can be discarded or not.
  • a slice may be marked as a discardable slice when any of the following conditions is true: the current picture is a RASL picture and the current slice NAL unit type is RASL; the current picture is a GDR picture and the current slice is a P or B slice (uni-directional inter-predicted or bi-directional inter-predicted); or the current picture is a recovering picture of the GDR picture and the current slice is a P or B slice following a preceding I slice in the same picture.
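  The three conditions above translate directly into a predicate. The string-valued picture and slice types below are illustrative stand-ins for the corresponding codec-level NAL unit types and flags, not actual syntax element values.

```python
# Direct encoding of the three discardable-slice conditions: RASL slices of
# RASL pictures, P/B slices of GDR pictures, and P/B slices of recovering
# pictures that follow a preceding I slice in the same picture.
def is_discardable_slice(picture_type: str, slice_type: str,
                         follows_i_slice: bool = False) -> bool:
    if picture_type == "RASL" and slice_type == "RASL":
        return True
    if picture_type == "GDR" and slice_type in ("P", "B"):
        return True
    if picture_type == "GDR_RECOVERING" and slice_type in ("P", "B") and follows_i_slice:
        return True
    return False
```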
  • FIG. 5 is a block diagram illustrating elements of an example video file 150.
  • video files in accordance with the ISO base media file format and extensions thereof store data in a series of objects, referred to as “boxes.”
  • video file 150 includes file type (FTYP) box 152, movie (MOOV) box 154, segment index (sidx) boxes 162, movie fragment (MOOF) boxes 164, and movie fragment random access (MFRA) box 166.
  • media files may include other types of media data (e.g., audio data, timed text data, or the like) that is structured similarly to the data of video file 150, in accordance with the ISO base media file format and its extensions.
  • File type (FTYP) box 152 generally describes a file type for video file 150.
  • File type box 152 may include data that identifies a specification that describes a best use for video file 150.
  • File type box 152 may alternatively be placed before MOOV box 154, movie fragment boxes 164, and/or MFRA box 166.
  • MOOV box 154 in the example of FIG. 5, includes movie header (MVHD) box 156, track (TRAK) box 158, and one or more movie extends (MVEX) boxes 160.
  • MVHD box 156 may describe general characteristics of video file 150.
  • MVHD box 156 may include data that describes when video file 150 was originally created, when video file 150 was last modified, a timescale for video file 150, a duration of playback for video file 150, or other data that generally describes video file 150.
  • TRAK box 158 may include data for a track of video file 150.
  • TRAK box 158 may include a track header (TKHD) box that describes characteristics of the track corresponding to TRAK box 158.
  • TRAK box 158 may include coded video pictures, while in other examples, the coded video pictures of the track may be included in movie fragments 164, which may be referenced by data of TRAK box 158 and/or sidx boxes 162.
  • video file 150 may include more than one track.
  • MOOV box 154 may include a number of TRAK boxes equal to the number of tracks in video file 150.
  • TRAK box 158 may describe characteristics of a corresponding track of video file 150.
  • TRAK box 158 may describe temporal and/or spatial information for the corresponding track.
  • a TRAK box similar to TRAK box 158 of MOOV box 154 may describe characteristics of a parameter set track, when encapsulation unit 30 (FIG. 1) includes a parameter set track in a video file, such as video file 150.
  • Encapsulation unit 30 may signal the presence of sequence level SEI messages in the parameter set track within the TRAK box describing the parameter set track.
  • MVEX boxes 160 may describe characteristics of corresponding movie fragments 164, e.g., to signal that video file 150 includes movie fragments 164, in addition to video data included within MOOV box 154, if any.
  • coded video pictures may be included in movie fragments 164 rather than in MOOV box 154. Accordingly, all coded video samples may be included in movie fragments 164, rather than in MOOV box 154.
  • MOOV box 154 may include a number of MVEX boxes 160 equal to the number of movie fragments 164 in video file 150.
  • Each of MVEX boxes 160 may describe characteristics of a corresponding one of movie fragments 164.
  • each MVEX box may include a movie extends header box (MEHD) box that describes a temporal duration for the corresponding one of movie fragments 164.
  • encapsulation unit 30 may store a sequence data set in a video sample that does not include actual coded video data.
  • a video sample may generally correspond to an access unit, which is a representation of a coded picture at a specific time instance.
  • the coded picture includes one or more VCL NAL units, which contain the information to construct all the pixels of the access unit, and other associated non-VCL NAL units, such as SEI messages.
  • encapsulation unit 30 may include a sequence data set, which may include sequence level SEI messages, in one of movie fragments 164.
  • Encapsulation unit 30 may further signal the presence of a sequence data set and/or sequence level SEI messages as being present in one of movie fragments 164 within the one of MVEX boxes 160 corresponding to the one of movie fragments 164.
  • SIDX boxes 162 are optional elements of video file 150. That is, video files conforming to the 3GPP file format, or other such file formats, do not necessarily include SIDX boxes 162. In accordance with the example of the 3GPP file format, a SIDX box may be used to identify a sub-segment of a segment (e.g., a segment contained within video file 150).
  • the 3GPP file format defines a sub-segment as “a self-contained set of one or more consecutive movie fragment boxes with corresponding Media Data box(es) and a Media Data Box containing data referenced by a Movie Fragment Box must follow that Movie Fragment box and precede the next Movie Fragment box containing information about the same track.”
  • the 3GPP file format also indicates that a SIDX box “contains a sequence of references to subsegments of the (sub)segment documented by the box.
  • the referenced subsegments are contiguous in presentation time.
  • the bytes referred to by a Segment Index box are always contiguous within the segment.
  • the referenced size gives the count of the number of bytes in the material referenced.”
  • SIDX boxes 162 generally provide information representative of one or more sub-segments of a segment included in video file 150. For instance, such information may include playback times at which sub-segments begin and/or end, byte offsets for the sub-segments, whether the sub-segments include (e.g., start with) a stream access point (SAP), a type for the SAP (e.g., whether the SAP is an instantaneous decoder refresh (IDR) picture, a clean random access (CRA) picture, a broken link access (BLA) picture, or the like), a position of the SAP (in terms of playback time and/or byte offset) in the sub-segment, and the like.
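  The byte-range bookkeeping a SIDX box enables can be sketched as follows. The two-field tuples are a simplification of the SegmentIndexBox reference entries in ISOBMFF (which also carry reference type, subsegment duration, and SAP details); this is illustrative, not a full parser.

```python
# Sketch: given the first sub-segment's byte offset and a list of
# (referenced_size, starts_with_sap) reference entries, compute the byte
# range of each sub-segment. Per the format, referenced bytes are
# contiguous within the segment.
def subsegment_ranges(first_offset: int, references: list) -> list:
    """Return (start_byte, end_byte_inclusive, starts_with_sap) per sub-segment."""
    ranges, offset = [], first_offset
    for referenced_size, starts_with_sap in references:
        ranges.append((offset, offset + referenced_size - 1, starts_with_sap))
        offset += referenced_size
    return ranges
```

  A client performing random access could scan the returned ranges for the first sub-segment whose flag indicates a SAP, then request exactly those bytes.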
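The SIDX fields described above (reference count, per-reference sizes and durations, and SAP information) follow the ISO BMFF 'sidx' (Segment Index) box layout. As an illustration only, a minimal parser sketch for a 'sidx' payload might read as follows; the function name and the returned dictionary keys are choices of this sketch, not part of the disclosure:

```python
import struct

def parse_sidx(box_payload: bytes):
    """Parse the payload of an ISO BMFF 'sidx' (Segment Index) box.

    `box_payload` is the box body after the 8-byte size/type header.
    Returns (timescale, earliest_presentation_time, first_offset,
    references), where each reference carries the referenced size,
    sub-segment duration, and SAP information described in the text.
    """
    version = box_payload[0]
    # bytes 1-3 are flags; reference_ID and timescale follow.
    reference_id, timescale = struct.unpack_from(">II", box_payload, 4)
    if version == 0:
        ept, first_offset = struct.unpack_from(">II", box_payload, 12)
        offset = 20
    else:
        ept, first_offset = struct.unpack_from(">QQ", box_payload, 12)
        offset = 28
    _reserved, ref_count = struct.unpack_from(">HH", box_payload, offset)
    offset += 4
    references = []
    for _ in range(ref_count):
        word1, duration, word2 = struct.unpack_from(">III", box_payload, offset)
        offset += 12
        references.append({
            "reference_type": word1 >> 31,          # 1 = references another sidx
            "referenced_size": word1 & 0x7FFFFFFF,  # byte count of the material
            "subsegment_duration": duration,        # in timescale units
            "starts_with_sap": word2 >> 31,
            "sap_type": (word2 >> 28) & 0x7,        # e.g., IDR vs. CRA vs. BLA
            "sap_delta_time": word2 & 0x0FFFFFFF,   # SAP position in the sub-segment
        })
    return timescale, ept, first_offset, references
```

The SAP fields here are what allow a client to locate random access points byte-accurately within a segment without scanning the media data itself.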
  • Movie fragments 164 may include one or more coded video pictures.
  • movie fragments 164 may include one or more groups of pictures (GOPs), each of which may include a number of coded video pictures, e.g., frames or pictures.
  • movie fragments 164 may include sequence data sets in some examples.
  • Each of movie fragments 164 may include a movie fragment header box (MFHD, not shown in FIG. 5).
  • the MFHD box may describe characteristics of the corresponding movie fragment, such as a sequence number for the movie fragment. Movie fragments 164 may be included in order of sequence number in video file 150.
  • MFRA box 166 may describe random access points within movie fragments 164 of video file 150.
  • MFRA box 166 is generally optional and need not be included in video files, in some examples.
  • A client device, such as client device 40, does not necessarily need to reference MFRA box 166 to correctly decode and display video data of video file 150.
  • MFRA box 166 may include a number of track fragment random access (TFRA) boxes (not shown) equal to the number of tracks of video file 150, or in some examples, equal to the number of media tracks (e.g., non-hint tracks) of video file 150.
  • movie fragments 164 may include one or more stream access points (SAPs), such as IDR pictures.
  • MFRA box 166 may provide indications of locations within video file 150 of the SAPs.
  • a temporal subsequence of video file 150 may be formed from SAPs of video file 150.
  • The temporal sub-sequence may also include other pictures, such as P-frames and/or B-frames that depend on SAPs.
  • Frames and/or slices of the temporal sub-sequence may be arranged within the segments such that frames/slices of the temporal sub-sequence that depend on other frames/slices of the sub-sequence can be properly decoded.
  • data used for prediction for other data may also be included in the temporal sub-sequence.
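In codecs such as HEVC and VVC, a frame ordinarily references only frames with an equal or lower temporal ID, so one way to realize the temporal sub-sequence described above is to keep SAPs plus every frame at or below a chosen TID. The sketch below assumes hypothetical per-frame records with "poc", "tid", and "is_sap" keys:

```python
def temporal_subsequence(frames, max_tid):
    """Return a decodable temporal sub-sequence: stream access points plus
    every frame whose temporal ID does not exceed max_tid.  This relies on
    the usual constraint that a frame references only frames with an equal
    or lower temporal ID, so no kept frame depends on a dropped one.
    """
    return [f for f in frames if f["is_sap"] or f["tid"] <= max_tid]
```

Dropping high-TID frames this way yields, e.g., half-rate playback without breaking any prediction chain among the remaining frames.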
  • FIG. 6 is a flowchart illustrating an example method including sending a packet including media data according to the techniques of this disclosure.
  • A server device, such as server device 60 of FIG. 1, may receive media data (200), e.g., from content preparation device 20 (FIG. 1).
  • the media data may be encapsulated media data or encoded media data.
  • the media data may include at least a portion of a frame/picture of video data, e.g., a slice or tile.
  • Server device 60 may also receive data indicating whether the media data is discardable (202) and an identifier for the media data (204).
  • the identifier may be, for example, a frame number, picture order count (POC) value, and/or other such identifier.
  • the identifier may further indicate, for example, a temporal identifier (TID), layer identifier (LID), or the like.
  • the identifier may also indicate a particular slice, tile, or other portion of a frame or picture that the media data represents.
  • Server device 60 may then encapsulate the media data into a packet (206).
  • the media data as received may already be encapsulated in a network packet, in some examples.
  • server device 60 may add data to an RTP header extension of the packet representing the identifier (208).
  • the identifier itself may indicate whether the media data of the packet is discardable, or server device 60 may further add data to the RTP header extension indicating whether the media data of the packet is discardable.
  • the media data may be discardable if the media data is predicted from reference media data that was not transmitted to the client device, e.g., because the reference media data precedes a random access point accessed by the client device.
  • Server device 60 may then send the packet to the client device.
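The FIG. 6 steps of adding the identifier and discardable flag to an RTP header extension (208) can be sketched using the RFC 8285 one-byte-header extension mechanism. The 4-byte element layout below (16-bit POC, 3-bit TID plus a discardable flag, 6-bit LID) is an illustrative mapping chosen for this sketch, not a standardized format:

```python
import struct

def build_frame_id_extension(ext_id: int, poc: int, tid: int, lid: int,
                             discardable: bool) -> bytes:
    """Build an RFC 8285 one-byte-header RTP extension block carrying a
    hypothetical frame-identifier element for one packet of media data."""
    assert 1 <= ext_id <= 14  # valid one-byte-header extension element IDs
    element = struct.pack(
        ">HBB",
        poc & 0xFFFF,                                # frame identifier (POC)
        ((tid & 0x7) << 5) | (0x10 if discardable else 0),  # TID + flag
        lid & 0x3F,                                  # layer identifier
    )
    # one-byte element header: 4-bit ID, 4-bit (length - 1)
    block = bytes([(ext_id << 4) | (len(element) - 1)]) + element
    block += b"\x00" * (-len(block) % 4)             # pad to a 32-bit boundary
    # 0xBEDE marks a one-byte-header extension; length is in 32-bit words.
    return struct.pack(">HH", 0xBEDE, len(block) // 4) + block
```

Because the element sits in the header extension rather than the payload, an intermediary can read the identifier and discardable flag without parsing (or decrypting) the encapsulated media data.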
  • FIG. 7 is a flowchart illustrating an example method including receiving a packet including media data according to the techniques of this disclosure.
  • Client device 40 of FIG. 1 may initially receive a packet including media data (250). That is, the packet may include a packet header and a payload, the packet header being separate from the payload.
  • the payload may correspond to the application layer data of the packet, e.g., media data formatted according to the ISO Base Media File Format, as explained above with respect to FIG. 5.
  • the payload may include XR data, audio data, and/or video data of a PDU Set.
  • the packet header may include an RTP header extension according to the techniques of this disclosure.
  • the RTP header extension may include, among other data, an identifier for the media data of the packet.
  • Client device 40 may extract the identifier for the media data from the RTP header extension (252).
  • Client device 40 may then determine whether the media data is discardable using the identifier (254). For example, client device 40 may determine that the identifier includes a POC value for a picture of the media data, a slice or tile identifier for the picture, a TID, and/or a LID for the picture.
  • Client device 40 may further determine whether a bitstream including the media data was randomly accessed from a stream access point other than the beginning of the bitstream.
  • client device 40 may further determine whether the stream access point follows one or more reference pictures for the media data of the packet, such that the reference pictures would not have been received. If the reference pictures have not been received, client device 40 may determine that the media data is discardable.
  • If the media data is discardable, client device 40 may discard the media data (258) without sending the media data to, e.g., video decoder 48 (FIG. 1).
  • Otherwise, client device 40 may forward the media data to video decoder 48 (260).
  • the method of FIG. 7 may also be performed by, e.g., client device 120 of FIG. 2.
  • a similar method may be performed by UPF 102 of FIG. 2.
  • When UPF 102 or another device performs the method of FIG. 7 and media data is sent to a video decoder, it may be assumed that the video decoder forms part of client device 120, such that sending the media data to the video decoder includes sending the packet to client device 120.
  • the method of FIG. 7 represents an example of a method including receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extracting, from the packet header, a video frame identifier for the frame of video data; and processing the payload according to the video frame identifier.
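The receiver-side discard check of FIG. 7 (252-258) reduces to: after random access at a stream access point, a frame is undecodable if any of its reference pictures precedes the access point and so was never received. A sketch, under the simplifying assumption that POC values increase with transmission order (which does not hold for all GOP structures):

```python
def is_discardable_after_random_access(reference_pocs, access_poc):
    """Return True if a frame received after random access at the stream
    access point with POC `access_poc` cannot be decoded, because at least
    one of its reference pictures precedes the access point and therefore
    was never received by the client."""
    return any(ref < access_poc for ref in reference_pocs)
```

An intra-coded frame has an empty reference list and is therefore never discarded by this test.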
  • Clause 1 A method of receiving video data, the method comprising: receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extracting, from the packet header, a video frame identifier for the frame of video data; and processing the payload according to the video frame identifier.
  • Clause 2 The method of clause 1, wherein the video frame identifier comprises a picture order count (POC) value.
  • Clause 3 The method of clause 1, wherein the video frame identifier comprises a display frame identifier or a current frame identifier.
  • Clause 4 The method of any of clauses 1-3, wherein the at least portion of the frame comprises a slice of the frame, and wherein the video frame identifier includes a slice identifier for the slice.
  • Clause 5 The method of any of clauses 1-4, wherein the at least portion of the frame comprises a tile of the frame, and wherein the video frame identifier includes a tile identifier for the tile.
  • Clause 6 The method of any of clauses 1-5, further comprising determining, using the video frame identifier, one or more of a frame type for the frame, a priority for the frame, or dependency information for the frame.
  • Clause 7 The method of any of clauses 1-6, wherein the at least portion of the frame of video data comprises a protocol data unit (PDU) of a PDU Set.
  • Clause 8 The method of any of clauses 1-7, further comprising extracting, from the packet header, one or more of a network abstraction layer (NAL) unit type for the at least portion of the frame, a temporal identifier (TID) for the at least portion of the frame, a layer identifier (LID) for the at least portion of the frame, data indicating whether the at least portion of the frame is intra-prediction coded, or data indicating whether the at least portion is discardable.
  • Clause 9 The method of any of clauses 1-8, further comprising processing a network abstraction layer (NAL) unit header for the frame, the NAL unit header including data indicating a priority value for the frame.
  • Clause 10 The method of any of clauses 1-9, further comprising processing an access unit delimiter (AUD) for an access unit corresponding to the frame, the AUD including data representing a priority value for the access unit.
  • Clause 11 The method of any of clauses 1-10, further comprising processing a picture header for the frame, the picture header including data representing a priority value for the frame.
  • Clause 12 The method of any of clauses 1-8, further comprising processing an open bitstream unit (OBU) for the at least portion of the frame, the OBU including data representing a priority value for the at least portion of the frame.
  • Clause 13 The method of clause 12, wherein the OBU comprises one of a frame header OBU or a metadata OBU.
  • Clause 14 The method of any of clauses 1-13, further comprising receiving data indicating a picture distance between the frame and a reference frame in coding order for the frame.
  • Clause 15 The method of clause 14, wherein receiving the data indicating the picture distance comprises receiving a network abstraction layer (NAL) unit including the data or a supplemental enhancement information (SEI) message including the data.
  • Clause 16 The method of any of clauses 14 and 15, wherein receiving the data indicating the picture distance between the frame and the reference frame for the frame comprises receiving data indicating picture distances between the frame and each active reference frame in coding order.
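The picture-distance signaling of clauses 14-16 can be illustrated as POC deltas between the current frame and each active reference frame. The helper below is a sketch of the computed values, not the signaled syntax:

```python
def picture_distances(current_poc, active_reference_pocs):
    """Picture distance to each active reference frame, expressed as a POC
    delta.  A positive delta means the reference precedes the current frame
    in output order; a negative delta means it follows (as can occur for
    B-frames)."""
    return [current_poc - ref for ref in active_reference_pocs]
```

Given these distances, an intermediary that knows where random access occurred can tell whether any needed reference falls before the access point without parsing the coded slice data.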
  • Clause 17 The method of any of clauses 1-16, further comprising receiving information indicating whether the portion of the frame of video data can be decoded without other portions of the frame.
  • Clause 18 The method of clause 17, wherein the portion of the frame comprises a protocol data unit (PDU), wherein the frame corresponds to a PDU Set including the PDU, and wherein the information indicates whether the PDU can be decoded without other PDUs of the PDU Set.
  • Clause 19 The method of any of clauses 1-18, further comprising receiving information indicating whether loop filtering is to be performed across one or more boundaries between the portion of the frame and one or more other portions of the frame.
  • Clause 20 The method of any of clauses 1-19, further comprising receiving information indicating whether the portion of the frame is independently coded without prediction from other portions of the frame.
  • Clause 21 The method of any of clauses 1-20, further comprising receiving data indicating that the frame is a discardable frame.
  • Clause 22 The method of clause 21, wherein receiving the data indicating that the frame is the discardable frame comprises receiving the frame in a network abstraction layer (NAL) unit, an access unit delimiter, a picture header, a slice header, a supplemental enhancement information (SEI) message, or a frame header open bitstream unit (OBU).
  • Clause 23 The method of any of clauses 1-22, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is independently coded; and in response to determining that the at least portion of the frame is independently coded, providing the at least portion of the frame to a video decoder.
  • Clause 24 The method of any of clauses 1-22, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is coded relative to a reference frame; determining that the GDR frame is an ordinal first frame retrieved for a bitstream including the video data such that the reference frame has not been retrieved; and in response to the reference frame having not been retrieved, discarding the at least portion of the video frame.
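The GDR handling of clauses 23 and 24 amounts to a per-portion routing decision: independently coded portions of a GDR frame always go to the decoder, while inter-coded portions are discarded when the GDR frame is the first frame retrieved from the bitstream (so their references were never received). A sketch, with illustrative function name and inputs:

```python
def route_gdr_portion(independently_coded: bool,
                      gdr_is_first_frame_retrieved: bool) -> str:
    """Decide whether one portion (e.g., a slice) of a GDR frame should be
    decoded or discarded, per the logic of clauses 23 and 24."""
    if independently_coded:
        return "decode"   # clause 23: always provide to the video decoder
    if gdr_is_first_frame_retrieved:
        return "discard"  # clause 24: references were never retrieved
    return "decode"       # references are available; decode normally
```

This is what lets a receiver begin playback at a GDR frame: the intra-coded refresh region is decoded immediately, and the remaining regions are recovered over the following frames.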
  • Clause 25 A device for retrieving media data, the device comprising one or more means for performing the method of any of clauses 1-24.
  • Clause 26 The device of clause 25, wherein the one or more means comprise one or more processors implemented in circuitry.
  • Clause 27 The device of clause 25, further comprising a memory configured to store video data.
  • Clause 28 The device of clause 25, wherein the device comprises at least one of: an integrated circuit; a microprocessor; and a wireless communication device.
  • Clause 29 A device for receiving media data, the device comprising: means for receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; means for extracting, from the packet header, a video frame identifier for the frame of video data; and means for processing the payload according to the video frame identifier.
  • Clause 30 A method of receiving video data, the method comprising: receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extracting, from the packet header, a video frame identifier for the frame of video data; and processing the payload according to the video frame identifier.
  • Clause 31 The method of clause 30, wherein the video frame identifier comprises a picture order count (POC) value.
  • Clause 32 The method of clause 30, wherein the video frame identifier comprises a display frame identifier or a current frame identifier.
  • Clause 33 The method of clause 30, wherein the at least portion of the frame comprises a slice of the frame, and wherein the video frame identifier includes a slice identifier for the slice.
  • Clause 34 The method of clause 30, wherein the at least portion of the frame comprises a tile of the frame, and wherein the video frame identifier includes a tile identifier for the tile.
  • Clause 35 The method of clause 30, further comprising determining, using the video frame identifier, one or more of a frame type for the frame, a priority for the frame, or dependency information for the frame.
  • Clause 36 The method of clause 30, wherein the at least portion of the frame of video data comprises a protocol data unit (PDU) of a PDU Set.
  • Clause 37 The method of clause 30, further comprising extracting, from the packet header, one or more of a network abstraction layer (NAL) unit type for the at least portion of the frame, a temporal identifier (TID) for the at least portion of the frame, a layer identifier (LID) for the at least portion of the frame, data indicating whether the at least portion of the frame is intra-prediction coded, or data indicating whether the at least portion is discardable.
  • Clause 38 The method of clause 30, further comprising processing a network abstraction layer (NAL) unit header for the frame, the NAL unit header including data indicating a priority value for the frame.
  • Clause 39 The method of clause 30, further comprising processing an access unit delimiter (AUD) for an access unit corresponding to the frame, the AUD including data representing a priority value for the access unit.
  • Clause 40 The method of clause 30, further comprising processing a picture header for the frame, the picture header including data representing a priority value for the frame.
  • Clause 41 The method of clause 30, further comprising processing an open bitstream unit (OBU) for the at least portion of the frame, the OBU including data representing a priority value for the at least portion of the frame.
  • Clause 42 The method of clause 41, wherein the OBU comprises one of a frame header OBU or a metadata OBU.
  • Clause 43 The method of clause 30, further comprising receiving data indicating a picture distance between the frame and a reference frame for the frame.
  • Clause 44 The method of clause 43, wherein receiving the data indicating the picture distance comprises receiving a network abstraction layer (NAL) unit including the data or a supplemental enhancement information (SEI) message including the data.
  • Clause 45 The method of clause 43, wherein receiving the data indicating the picture distance between the frame and the reference frame for the frame comprises receiving data indicating picture distances between the frame and each active reference frame.
  • Clause 46 The method of clause 30, further comprising receiving information indicating whether the portion of the frame of video data can be decoded without other portions of the frame.
  • Clause 47 The method of clause 46, wherein the portion of the frame comprises a protocol data unit (PDU), wherein the frame corresponds to a PDU Set including the PDU, and wherein the information indicates whether the PDU can be decoded without other PDUs of the PDU Set.
  • Clause 48 The method of clause 30, further comprising receiving information indicating whether loop filtering is to be performed across one or more boundaries between the portion of the frame and one or more other portions of the frame.
  • Clause 49 The method of clause 30, further comprising receiving information indicating whether the portion of the frame is independently coded without prediction from other portions of the frame.
  • Clause 50 The method of clause 30, further comprising receiving data indicating that the frame is a discardable frame.
  • Clause 51 The method of clause 50, wherein receiving the data indicating that the frame is the discardable frame comprises receiving the frame in a network abstraction layer (NAL) unit, an access unit delimiter, a picture header, a slice header, a supplemental enhancement information (SEI) message, or a frame header open bitstream unit (OBU).
  • Clause 52 The method of clause 30, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is independently coded; and in response to determining that the at least portion of the frame is independently coded, providing the at least portion of the frame to a video decoder.
  • Clause 53 The method of clause 30, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is coded relative to a reference frame; determining that the GDR frame is an ordinal first frame retrieved for a bitstream including the video data such that the reference frame has not been retrieved; and in response to the reference frame having not been retrieved, discarding the at least portion of the video frame.
  • Clause 54 A method of receiving video data, the method comprising: receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extracting, from the packet header, a video frame identifier for the frame of video data; and processing the payload according to the video frame identifier.
  • Clause 55 The method of clause 54, wherein the video frame identifier comprises a picture order count (POC) value.
  • Clause 56 The method of clause 54, wherein the video frame identifier comprises a display frame identifier or a current frame identifier.
  • Clause 57 The method of clause 54, wherein the at least portion of the frame comprises a slice of the frame, and wherein the video frame identifier includes a slice identifier for the slice.
  • Clause 58 The method of clause 54, wherein the at least portion of the frame comprises a tile of the frame, and wherein the video frame identifier includes a tile identifier for the tile.
  • Clause 59 The method of clause 54, further comprising determining, using the video frame identifier, one or more of a frame type for the frame, a priority for the frame, or dependency information for the frame.
  • Clause 60 The method of clause 54, wherein the at least portion of the frame of video data comprises a protocol data unit (PDU) of a PDU Set.
  • Clause 61 The method of clause 54, further comprising extracting, from the packet header, one or more of a network abstraction layer (NAL) unit type for the at least portion of the frame, a temporal identifier (TID) for the at least portion of the frame, a layer identifier (LID) for the at least portion of the frame, data indicating whether the at least portion of the frame is intra-prediction coded, or data indicating whether the at least portion is discardable.
  • Clause 62 The method of clause 54, further comprising processing a network abstraction layer (NAL) unit header for the frame, the NAL unit header including data indicating a priority value for the frame.
  • Clause 63 The method of clause 54, further comprising processing an access unit delimiter (AUD) for an access unit corresponding to the frame, the AUD including data representing a priority value for the access unit.
  • Clause 64 The method of clause 54, further comprising processing a picture header for the frame, the picture header including data representing a priority value for the frame.
  • Clause 65 The method of clause 54, further comprising processing an open bitstream unit (OBU) for the at least portion of the frame, the OBU including data representing a priority value for the at least portion of the frame.
  • Clause 66 The method of clause 65, wherein the OBU comprises one of a frame header OBU or a metadata OBU.
  • Clause 67 The method of clause 54, further comprising receiving data indicating a picture distance between the frame and a reference frame for the frame.
  • Clause 68 The method of clause 67, wherein receiving the data indicating the picture distance comprises receiving a network abstraction layer (NAL) unit including the data or a supplemental enhancement information (SEI) message including the data.
  • Clause 69 The method of clause 67, wherein receiving the data indicating the picture distance between the frame and the reference frame for the frame comprises receiving data indicating picture distances between the frame and each active reference frame.
  • Clause 70 The method of clause 54, further comprising receiving information indicating whether the portion of the frame of video data can be decoded without other portions of the frame.
  • Clause 71 The method of clause 70, wherein the portion of the frame comprises a protocol data unit (PDU), wherein the frame corresponds to a PDU Set including the PDU, and wherein the information indicates whether the PDU can be decoded without other PDUs of the PDU Set.
  • Clause 72 The method of clause 54, further comprising receiving information indicating whether loop filtering is to be performed across one or more boundaries between the portion of the frame and one or more other portions of the frame.
  • Clause 73 The method of clause 54, further comprising receiving information indicating whether the portion of the frame is independently coded without prediction from other portions of the frame.
  • Clause 74 The method of clause 54, further comprising receiving data indicating that the frame is a discardable frame.
  • Clause 75 The method of clause 74, wherein receiving the data indicating that the frame is the discardable frame comprises receiving the frame in a network abstraction layer (NAL) unit, an access unit delimiter, a picture header, a slice header, a supplemental enhancement information (SEI) message, or a frame header open bitstream unit (OBU).
  • Clause 76 The method of clause 54, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is independently coded; and in response to determining that the at least portion of the frame is independently coded, providing the at least portion of the frame to a video decoder.
  • Clause 77 The method of clause 54, wherein processing the payload comprises: determining that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is coded relative to a reference frame; determining that the GDR frame is an ordinal first frame retrieved for a bitstream including the video data such that the reference frame has not been retrieved; and in response to the reference frame having not been retrieved, discarding the at least portion of the video frame.
  • Clause 78 A device for retrieving media data, the device comprising: a memory; and a processing system comprising one or more processors implemented in circuitry, the processing system being configured to: receive a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extract, from the packet header, a video frame identifier for the frame of video data; and process the payload according to the video frame identifier.
  • Clause 79 The device of clause 78, wherein the video frame identifier comprises at least one of a picture order count (POC) value, a display frame identifier, or a current frame identifier.
  • Clause 80 The device of clause 78, wherein the at least portion of the frame comprises a slice of the frame or a tile of the frame, and wherein the video frame identifier includes an identifier for the slice of the frame or the tile of the frame.
  • Clause 81 The device of clause 78, wherein the processing system is configured to determine, using the video frame identifier, one or more of a frame type for the frame, a priority for the frame, or dependency information for the frame.
  • Clause 82 The device of clause 78, wherein the at least portion of the frame of video data comprises a protocol data unit (PDU) of a PDU Set.
  • Clause 83 The device of clause 78, wherein the processing system is further configured to extract, from the packet header, one or more of a network abstraction layer (NAL) unit type for the at least portion of the frame, a temporal identifier (TID) for the at least portion of the frame, a layer identifier (LID) for the at least portion of the frame, data indicating whether the at least portion of the frame is intra-prediction coded, or data indicating whether the at least portion is discardable.
  • Clause 84 The device of clause 78, wherein the processing system is further configured to process a network abstraction layer (NAL) unit header for the frame, the NAL unit header including data indicating a priority value for the frame.
  • Clause 85 The device of clause 78, wherein the processing system is further configured to process an access unit delimiter (AUD) for an access unit corresponding to the frame, the AUD including data representing a priority value for the access unit.
  • Clause 86 The device of clause 78, wherein the processing system is further configured to process a picture header for the frame, the picture header including data representing a priority value for the frame.
  • Clause 87 The device of clause 78, wherein the processing system is further configured to process an open bitstream unit (OBU) for the at least portion of the frame, the OBU including data representing a priority value for the at least portion of the frame.
  • Clause 88 The device of clause 78, wherein the processing system is further configured to receive data indicating a picture distance between the frame and a reference frame for the frame, the data being included in a network abstraction layer (NAL) unit or a supplemental enhancement information (SEI) message.
  • Clause 89 The device of clause 78, wherein the processing system is further configured to receive information indicating whether the portion of the frame of video data can be decoded without other portions of the frame.
  • Clause 90 The device of clause 78, wherein the processing system is further configured to receive information indicating whether loop filtering is to be performed across one or more boundaries between the portion of the frame and one or more other portions of the frame.
  • Clause 91 The device of clause 78, wherein to process the payload, the processing system is configured to: determine that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is independently coded; and in response to determining that the at least portion of the frame is independently coded, provide the at least portion of the frame to a video decoder.
  • Clause 92 The device of clause 78, wherein to process the payload, the processing system is configured to: determine that the frame is a gradual decoder refresh (GDR) frame and that the at least portion of the frame is coded relative to a reference frame; determine that the GDR frame is an ordinal first frame retrieved for a bitstream including the video data such that the reference frame has not been retrieved; and in response to the reference frame having not been retrieved, discard the at least portion of the video frame.
  • Clause 93 A device for receiving video data, the device comprising: means for receiving a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; means for extracting, from the packet header, a video frame identifier for the frame of video data; and means for processing the payload according to the video frame identifier.
  • Clause 94 A computer-readable storage medium having stored thereon instructions that, when executed, cause a processor to: receive a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extract, from the packet header, a video frame identifier for the frame of video data; and process the payload according to the video frame identifier.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • Computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure.
  • A computer program product may include a computer-readable medium.
  • Such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • Coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), and wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
  • The functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
  • Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
  • Various examples have been described. These and other examples are within the scope of the following claims.
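The packet handling recited in clauses 91 through 94 — extracting a video frame identifier from a packet header that is separate from the payload, then decoding or discarding the payload depending on whether the frame is a gradual decoder refresh (GDR) frame whose reference is available — can be illustrated with a minimal sketch. The byte layout below (a 2-byte identifier followed by a 1-byte flags field) and the `process_packet` helper are illustrative assumptions only; the claims do not specify any particular header format.

```python
import struct

# Hypothetical header layout (NOT specified by the claims): a 2-byte
# big-endian video frame identifier followed by a 1-byte flags field.
FLAG_GDR = 0x01          # frame is a gradual decoder refresh (GDR) frame
FLAG_INDEPENDENT = 0x02  # this portion of the frame is independently coded

def process_packet(packet: bytes, retrieved_frames: set):
    """Extract the frame identifier from the header (separate from the
    payload) and decide whether the payload is decoded or discarded."""
    frame_id, flags = struct.unpack_from(">HB", packet, 0)
    payload = packet[3:]
    is_gdr = bool(flags & FLAG_GDR)
    independent = bool(flags & FLAG_INDEPENDENT)
    if is_gdr and independent:
        # Clause 91: an independently coded portion of a GDR frame can be
        # provided to the video decoder without any prior reference frame.
        retrieved_frames.add(frame_id)
        return ("decode", frame_id, payload)
    if is_gdr and not independent and not retrieved_frames:
        # Clause 92: the GDR frame is the ordinal first frame retrieved for
        # the bitstream, so its reference frame is unavailable; discard.
        return ("discard", frame_id, None)
    retrieved_frames.add(frame_id)
    return ("decode", frame_id, payload)
```

For example, an independently coded GDR portion with identifier 7 is handed to the decoder, while an inter-coded GDR portion arriving first in the stream is discarded because its reference frame has not been retrieved.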

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An example device for retrieving media data includes a memory; and a processing system comprising one or more processors implemented in circuitry, the processing system being configured to: receive a packet including a packet header and a payload including at least a portion of a frame of video data, the packet header being separate from the payload; extract, from the packet header, a video frame identifier for the frame of video data; and process the payload according to the video frame identifier.
PCT/US2023/024836 2022-11-01 2023-06-08 Identification et marquage d'unités de données vidéo pour le transport en réseau de données vidéo WO2024096934A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263381902P 2022-11-01 2022-11-01
US63/381,902 2022-11-01
US18/329,994 US20240146994A1 (en) 2022-11-01 2023-06-06 Identifying and marking video data units for network transport of video data
US18/329,994 2023-06-06

Publications (1)

Publication Number Publication Date
WO2024096934A1 true WO2024096934A1 (fr) 2024-05-10

Family

ID=87070969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/024836 WO2024096934A1 (fr) 2022-11-01 2023-06-08 Identification et marquage d'unités de données vidéo pour le transport en réseau de données vidéo

Country Status (2)

Country Link
TW (1) TW202420830A (fr)
WO (1) WO2024096934A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140036999A1 (en) * 2012-06-29 2014-02-06 Vid Scale Inc. Frame prioritization based on prediction information
US20140049604A1 (en) * 2012-08-16 2014-02-20 Qualcomm Incorporated Constructing reference picture lists for multi-view or 3dv video coding
US20140086315A1 (en) * 2012-09-25 2014-03-27 Apple Inc. Error resilient management of picture order count in predictive coding systems
WO2021122070A1 (fr) * 2019-12-19 2021-06-24 Telefonaktiebolaget Lm Ericsson (Publ) Prédiction d'en-tête d'image
US20210211524A1 (en) * 2016-11-04 2021-07-08 Beijing Xiaomi Mobile Software Co., Ltd. Method and device for generating protocol data unit (pdu) packet
WO2022144184A1 (fr) * 2020-12-28 2022-07-07 Koninklijke Kpn N.V. Décodage et sortie classés par ordre de priorité de parties d'une image dans un codage vidéo
US20220217413A1 (en) * 2019-09-24 2022-07-07 Huawei Technologies Co., Ltd. Signaling of Picture Header in Video Coding


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3GPP TR 23.700-60
KENICHI YAMAMOTO ET AL: "KI #5, Sol #24: Miscellaneous correction to Solution #24", vol. 3GPP SA 2, no. Online; 20220817 - 20220826, 10 August 2022 (2022-08-10), XP052185002, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/tsg_sa/WG2_Arch/TSGS2_152E_Electronic_2022-08/Docs/S2-2206607.zip S2-2206607 KI#5, Sol#24 Miscellaneous correction to Solution #24.docx> [retrieved on 20220810] *

Also Published As

Publication number Publication date
TW202420830A (zh) 2024-05-16

Similar Documents

Publication Publication Date Title
US11706502B2 (en) Segment types as delimiters and addressable resource identifiers
US20230283863A1 (en) Retrieving and accessing segment chunks for media streaming
US9992555B2 (en) Signaling random access points for streaming video data
KR102434300B1 (ko) Sample entries and random access
EP2589222B1 (fr) Signaling of video samples for trick mode video representations
CN112154672A (zh) Signaling, in a manifest file, missing portions of media data for network streaming
KR102434299B1 (ko) Sample entries and random access
CN116762346A (zh) Background data traffic distribution for media data
US20240146994A1 (en) Identifying and marking video data units for network transport of video data
WO2024096934A1 (fr) Identification et marquage d'unités de données vidéo pour le transport en réseau de données vidéo
US20240267421A1 (en) Signaling media timing information from a media application to a network element
US20240098307A1 (en) Automatic generation of video content in response to network interruption
US11943501B2 (en) Dynamic resolution change hints for adaptive streaming
US20210344992A1 (en) Calculating start time availability for streamed media data
CN114430909A (zh) 用于自适应比特率组播的修复机制

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23736552

Country of ref document: EP

Kind code of ref document: A1