US20140105309A1 - Systems and methods for signaling and performing temporal level switching in scalable video coding - Google Patents

Systems and methods for signaling and performing temporal level switching in scalable video coding

Info

Publication number
US20140105309A1
US20140105309A1 (Application No. US 14/072,638)
Authority
US
United States
Prior art keywords
temporal
temporal level
picb
decoding
picc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/072,638
Inventor
Alexandros Eleftheriadis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidyo Inc
Original Assignee
Vidyo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/US2006/028366 external-priority patent/WO2008082375A2/en
Priority claimed from PCT/US2006/028367 external-priority patent/WO2007075196A1/en
Priority claimed from PCT/US2006/028365 external-priority patent/WO2008060262A1/en
Priority claimed from PCT/US2006/061815 external-priority patent/WO2007067990A2/en
Priority claimed from PCT/US2006/062569 external-priority patent/WO2007076486A2/en
Priority claimed from PCT/US2007/062357 external-priority patent/WO2007095640A2/en
Priority claimed from PCT/US2007/063335 external-priority patent/WO2007103889A2/en
Priority claimed from PCT/US2007/065003 external-priority patent/WO2007112384A2/en
Priority claimed from PCT/US2007/065554 external-priority patent/WO2007115133A2/en
Priority claimed from PCT/US2007/080089 external-priority patent/WO2008042852A2/en
Priority to US14/072,638 priority Critical patent/US20140105309A1/en
Application filed by Vidyo Inc filed Critical Vidyo Inc
Priority to US14/156,243 priority patent/US8861613B2/en
Publication of US20140105309A1 publication Critical patent/US20140105309A1/en
Abandoned legal-status Critical Current

Classifications

    • H04N19/00533
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/31: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662: Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Media communication systems and methods for media encoded using scalable coding with temporal scalability are provided. Transmitting endpoints include switching information in their transmitted media to indicate if temporal level switching at a decoder can occur at any frame of the transmitted encoded media.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. provisional patent application Ser. No. 60/829,609 filed Oct. 16, 2006. Further, this application is a continuation-in-part of International patent application Nos. PCT/US06/28365, filed Jul. 20, 2006, PCT/US06/28366, filed Jul. 20, 2006, PCT/US06/61815, filed Dec. 8, 2006, PCT/US06/62569, filed Dec. 22, 2006, PCT/US07/80089, filed Oct. 1, 2007, PCT/US07/62357, filed Feb. 16, 2007, PCT/US07/65554, filed Mar. 29, 2007, PCT/US07/65003, filed Mar. 27, 2007, PCT/US06/28367, filed Jul. 20, 2006, and PCT/US07/63335, filed Mar. 5, 2007. All of the aforementioned applications, which are commonly assigned, are hereby incorporated by reference herein in their entireties.
  • FIELD OF THE INVENTION
  • The present invention relates to video communication systems. In particular, the invention relates to communication systems that use temporally scalable video coding and in which a receiver or intermediate gateway switches from one temporal level to a higher or lower level to meet frame rate, bit rate, processing power, or other system requirements.
  • BACKGROUND OF THE INVENTION
  • New digital video and audio ‘scalable’ coding techniques, which aim to generally improve coding efficiency, have a number of new structural characteristics (e.g., scalability). In scalable coding, an original or source signal is represented using two or more hierarchically structured bitstreams. The hierarchical structure implies that decoding of a given bitstream depends on the availability of some or all other bitstreams that are lower in the hierarchy. Each bitstream, together with the bitstreams it depends on, offers a representation of the original signal at a particular temporal, fidelity (e.g., in terms of signal-to-noise ratio (SNR)), or spatial resolution (for video).
  • It is understood that the term ‘scalable’ does not refer to a numerical magnitude or scale, but refers to the ability of the encoding technique to offer a set of different bitstreams corresponding to efficient representations of the original or source signal at different ‘scales’ of resolutions or other signal qualities. The ITU-T H.264 Annex G specification, which is referred to as Scalable Video Coding (SVC), is an example of a video coding standard that offers video coding scalability in all of temporal, spatial, and fidelity dimensions. SVC is an extension of the H.264 standard (also known as Advanced Video Coding or AVC). An example of an earlier standard, which also offered all three types of scalability, is ISO MPEG-2 (also published as ITU-T H.262). ITU G.729.1 (also known as G.729EV) is an example of a standard offering scalable audio coding.
  • The concept of scalability was introduced in video and audio coding as a solution to distribution problems in streaming and broadcasting, and to allow a given communication system to operate with varying access networks (e.g., clients connected with different bandwidths), under varying network conditions (e.g., bandwidth fluctuations), and with various client devices (e.g., a personal computer that uses a large monitor vs. a handheld device with a much smaller screen).
  • Scalable video coding techniques, which are specifically designed for interactive video communication applications such as videoconferencing, are described in commonly assigned International patent application PCT/US06/028365. Further, commonly assigned International patent application PCT/US06/028365 describes the design of a new type of server called the Scalable Video Communication Server (SVCS). SVCS can advantageously use scalable coded video for high-quality and low-delay video communication and has a complexity, which is significantly reduced compared to traditional switching or transcoding Multipoint Control Units (MCUs). Similarly, commonly assigned International patent application PCT/US06/62569 describes a Compositing Scalable Video Coding Server (CSVCS), which has the same benefits as an SVCS but produces a single coded output bit stream. Furthermore, International patent application PCT/US07/80089 describes a Multicast Scalable Video Coding Server (MSVCS), which has the same benefits as an SVCS but utilizes available multicast communication channels. The scalable video coding design and the SVCS/CSVCS architecture can be used in further advantageous ways, which are described, for example, in commonly assigned International patent applications PCT/US06/028367, PCT/US06/027368, PCT/US06/061815, PCT/US07/62357, and PCT/US07/63335. These applications describe the use of scalable coding techniques and SVCS/CVCS architecture for effective trunking between servers, reduced jitter buffer delay, error resilience and random access, “thinning” of scalable video bitstreams to improve coding efficiency with reduced packet loss, and rate control, respectively. Further, commonly assigned International patent application PCT/US07/65554 describes techniques for transcoding between scalable video coding formats and other formats.
  • Consideration is now being given to further improving video communication systems that use scalable video coding. In such systems, a source may be a transmitting endpoint that encodes and transmits live video over a communication network, a streaming server that transmits pre-coded video, or a software module that provides access to a file stored in a mass storage or other access device. Similarly, a receiver may be a receiving endpoint that obtains the coded video or audio bit stream over a communication network, or directly from a mass storage or other access device. An intermediate processing entity in the system may be an SVCS or a CSVCS. Attention is being directed toward improving the efficiency of switching between temporal layers by receivers and intermediate processing entities.
  • SUMMARY OF THE INVENTION
  • Systems and methods for signaling and temporal level switching in scalable video communication systems are provided. The systems and methods involve signaling select information, which enables temporal level switching to both lower and higher levels to be performed at arbitrary picture positions. The information is communicated as certain constraints in the temporal prediction structure of the underlying video codec. The information can be used in intermediate processing systems as well as receivers in order to adapt to different system resources (e.g., frame rate, bit rate, processing power).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic illustration of an exemplary architecture of a communication system, in accordance with the principles of the present invention;
  • FIGS. 2 a-2 c are schematic illustrations of examples of non-nested temporal layer prediction structures, in accordance with the principles of the present invention;
  • FIG. 3 is a schematic illustration of an example of a nested temporal layer prediction structure, in accordance with the principles of the present invention;
  • FIG. 4 is an illustration of exemplary syntax modifications for temporal level nesting in SVC's Sequence Parameter Set, in accordance with the principles of the present invention;
  • FIG. 5 is an illustration of exemplary syntax modifications for temporal level nesting in SVC's Scalability Information SEI message, in accordance with the principles of the present invention;
  • FIG. 6 is a schematic illustration of an exemplary architecture of a processing unit (encoder/server, gateway, or receiver), in accordance with the principles of the present invention; and
  • FIG. 7 is a flow diagram illustrating an exemplary operation of an NAL Filtering Unit, in accordance with the principles of the present invention.
  • Throughout the figures the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Systems and methods for “switching” signals in communication systems, which use scalable coding, are provided. The switching systems and methods are designed for communication systems with temporal scalability.
  • FIG. 1 shows an exemplary architecture of a communication system 100, which uses scalable coding. Communication system 100 includes a media server or encoder 110 (e.g., a streaming server or a transmitting endpoint), which communicates video and/or audio signals with a client/receiver 120 over a network 130 through a media gateway 140.
  • The inventive “switching” systems and methods are described herein using communication system 100 as an example. For brevity, the description herein is limited to the video portion of communication system 100. It will be understood, however, that the switching systems and methods also can be used for the scalable audio portions, with the understanding that no spatial scalability dimension can be provided to an audio signal, but multi-channel coding may additionally be used in audio signal coding. Further, the systems and methods described herein also can be used for other multimedia data (e.g., graphics) which are coded in a scalable fashion.
  • In a preferred embodiment of communication system 100, H.264 SVC coding format (‘SVC’) is used for video communication. (See, e.g., the SVC JD7 specification, T. Wiegand, G. Sullivan, J. Reichel, H. Schwarz, M. Wien, eds., “Joint Draft 7: Scalable Video Coding,” Joint Video Team, Doc. JVT-T201, Klagenfurt, July 2006, which is incorporated by reference herein in its entirety). SVC is the scalable video coding extension (Annex G) of the H.264 AVC video coding standard. The base layer of an SVC stream by design is compliant to the AVC specification.
  • An SVC coded bitstream can be structured into several components or layers. A base layer offers a representation of the source signal at some basic fidelity dimension or level. Additional layers (enhancement layers) provide information for improved representation of the signal in the additional scalability dimensions above the basic fidelity dimension. SVC offers considerable flexibility in creating bitstream structures with scalability in several dimensions, namely spatial, temporal, and fidelity or quality dimensions. It is noted that the AVC standard already supports temporal scalability through its use of reference picture lists and associated reference picture list reordering commands.
  • It is further noted that the layers in the coded bitstream are typically formed in a pyramidal structure, in which the decoding of a layer may require the presence of one or more lower layers. Usually, the base layer is required for decoding of any of the enhancement layers in the pyramidal structure. However, not all scalable encoding techniques have a pyramidal layer structure. For example, when scalability is provided through multiple description coding or simulcasting, independent decoding of some or all layers may be possible. Specifically for SVC, it is possible to effectively implement simulcasting by turning off all inter-layer prediction modes in the encoder. The switching systems and methods described herein are applicable to all scalability formats including both pyramidal and non-pyramidal structures.
  • Scalability has features for addressing several system-level challenges, such as heterogeneous networks and/or clients, time-varying network performance, best-effort network delivery, etc. In order to be able to effectively use the scalability features, however, it is desirable that they are made accessible to system components in addition to the video encoder and decoder.
  • As previously noted, the switching systems and methods of the present invention are directed toward communication systems having temporal scalability (e.g., system 100). It is noted that use of media gateway 140 in system 100 is optional. The switching systems and methods of the present invention are also applicable when instead of media gateway 140 a direct media server-to-client connection is used, or when the media server is replaced by a file that is directly accessible to the client on a mass storage or other access device, either directly or indirectly (e.g., a file access through a communication network). It is further noted that the systems and methods of the present invention remain the same when more than one media gateway 140 is present in the path from the media server or encoder to the receiver.
  • With renewed reference to FIG. 1, consider a simple operational scenario in which media server/encoder 110 (e.g., a streaming server or a transmitting endpoint encoder) communicates scalable media with client/receiver 120 through media gateway 140. This simple scenario requires that a connection be made between the media server and the client for transmitting an agreed-upon set of layers, which may, for example, be Real-time Transport Protocol (RTP) encapsulated SVC Network Abstraction Layer (NAL) units. Furthermore, media gateway 140 has to be instructed, or has to decide on its own, how to best operationally utilize the incoming packets (e.g., the transmitted RTP-encapsulated SVC NAL units). In the case where media gateway 140 has an SVCS/CSVCS architecture, this operational decision corresponds to a decision on which packets to drop and which to forward. Further, for proper decoder operation, client/receiver 120 must know or be able to deduce which set of layers it is supposed to receive through media gateway 140.
  • To enable these operations, system 100 must represent and communicate the scalability structure of the transmitted bit stream to the various system components. As an illustrative example, consider a video signal with two temporal resolutions, 15 and 30 fps, and two spatial resolutions, QCIF and CIF. Thus, the video signal has a four-layer scalability structure: layer L0 containing the QCIF signal at 15 fps; layer L1 containing the QCIF signal enhancement for 30 fps; layer S0 containing the CIF signal enhancement for 15 fps; and layer S1 containing the CIF signal enhancement for 30 fps. The coding dependency in the four-layer scalability structure may, for example, be such that L0 is the base layer, L1 depends on L0, S0 depends on L0, and S1 depends on both L1 and S0. System 100 must describe this four-layer structure to the system components so that they can properly process the video signal.
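  • Purely as an illustration, the four-layer dependency structure of this example can be captured in a small lookup structure. The following C++ sketch takes the layer names and dependencies from the example above; the map-based representation itself is merely an illustrative assumption, not part of the SVC syntax.
```cpp
#include <map>
#include <string>
#include <vector>

int main() {
    // Each layer maps to the layers it directly depends on for decoding,
    // per the example: L0 is the base layer, L1 and S0 depend on L0,
    // and S1 depends on both L1 and S0.
    std::map<std::string, std::vector<std::string>> coding_dependencies = {
        {"L0", {}},            // QCIF, 15 fps (base layer)
        {"L1", {"L0"}},        // QCIF, 30 fps temporal enhancement
        {"S0", {"L0"}},        // CIF, 15 fps spatial enhancement
        {"S1", {"L1", "S0"}},  // CIF, 30 fps enhancement
    };
    return 0;
}
```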
  • Supplemental Enhancement Information (SEI) messages are data structures contained in an SVC bitstream that provide ancillary information about the coded video signal but are not necessary for the operation of the decoding process. SVC offers a mechanism for describing the scalability structure of an SVC coded video bitstream through its “Scalability Information” SEI message (SSEI). The SSEI in Section G.10.1.1 of the SVC JD7 specification is designed to enable capability negotiation (e.g., during a connection setup), stream adaptation (by a video server or intermediate media gateways), and low-complexity processing (e.g., without inference based on detailed bitstream parsing).
  • The SSEI, defined in Section G.10.1.1 of the SVC JD7 specification, includes descriptive information about each layer (e.g., frame rate, profile information), and importantly, coding dependency information (i.e., which other layers a given layer depends on for proper decoding). Each layer is identified, within the scope of the bitstream, by a unique ‘layer id’. The coding dependency information for a particular layer is communicated by encoding the number of directly dependent layers (num_directly_dependent_layers), and a series of difference values (directly_dependent_layer_id_delta), which when added to the particular layer's layer id identify the layer ids of the layers that the particular layer depends on for decoding.
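  • For illustration only, the delta-based dependency signaling just described can be expanded as in the following sketch. The function name is hypothetical, and the exact field coding of the deltas in the JD7 SSEI syntax may differ; the sketch simply follows the description above.
```cpp
#include <vector>

// Recover the layer ids that a given layer directly depends on: each
// directly_dependent_layer_id_delta value, added to the layer's own layer id,
// yields the layer id of one layer it depends on for decoding.
std::vector<int> directlyDependentLayerIds(int layer_id,
                                           const std::vector<int>& deltas) {
    std::vector<int> dependent_ids;
    dependent_ids.reserve(deltas.size());
    for (int delta : deltas)  // one delta per num_directly_dependent_layers
        dependent_ids.push_back(layer_id + delta);
    return dependent_ids;
}
```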
  • Additionally, the “Scalability Information Layers Not Present” SEI message (SSEI-LNP) defined in G.10.1.2, and the “Scalability Information Dependency Change” SEI message (SSEI-DC) defined in G.10.1.3, provide for in-band or out-of-band signaling of dynamic changes in the transmitted bitstream. The former indicates which layers, compared with the initial SSEI, are not present in the bitstream from the point it is received, whereas the latter indicates inter-layer prediction dependency changes in the bitstream. International Patent Application No. PCT/US07/065003 describes these as well as additional systems and methods for managing scalability information.
  • Generally, the designs of the SSEI, SSEI-LNP, and SSEI-DC messages are such that when used in combination, the messages allow intermediate gateways or receivers to be continually informed about the overall structure of the bitstream transmitted from a server/encoder or gateway and to perform correct adaptation functions. There are, however, important limitations in the designs, which become apparent upon close examination of different possible coding structures that may be used in real communication systems.
  • For example, the SVC JD7 draft allows temporal structures that contradict the pyramidal structure on which layering is built, and that can be problematic in real applications. Specifically, the only limitation that the SVC JD7 imposes on temporal levels is the following: “The decoding of any access unit with temporal_level equal to currTl shall be independent of all access units with temporal_level greater than currTl.” (See G.7.4.1, NAL unit SVC header extension semantics, p. 405). This limitation ensures that a given temporal level can be decoded without access to information from higher temporal levels. It does not address, however, any dependencies that may exist within the particular temporal level as well as between the same and lower temporal levels. The SVC JD7 limitation ensures that a transition from a higher temporal level to a lower temporal level can be made immediately by simply discarding all access units with a higher temporal level. The reverse operation, i.e., switching or transitioning from a lower temporal level to a higher temporal level, has a dependency problem.
  • The problem can be understood with reference to FIGS. 2 a and 2 b, which show exemplary temporal layer picture prediction structures. FIG. 2( a ) shows a “temporally non-nested” structure 200 a with two temporal layers, Layer 0 and Layer 1. The second layer (Layer 1) is formed as a completely separate “thread” that originates in the first frame (Layer 0). Since decoding of Layer 0 does not depend on Layer 1, this is a valid structure for SVC under the SVC JD7 draft. The problem of transitioning from a lower temporal level to a higher temporal level with this structure is apparent for a receiver that receives only Layer 0 (at frames 0, 2, 4, etc.). The receiver cannot add Layer 1 at will because the temporal extent of the dependency of Layer 1 on Layer 0 crosses over frames of Layer 0. If, for example, the receiver wishes to add Layer 1 at frame 2, it cannot do so by starting the decoding operation (for Layer 1) at the next frame (frame 3), since such decoding operation requires both frames 0 and 1, the latter of which was not received.
  • FIG. 2( b ) shows a similar temporally non-nested structure 200 b, with a slightly more complicated coding structure of Layers 0 and 1. A receiver/decoder cannot switch to Layer 1 at frame 2, since frame 3 is predicted from frame 1.
  • FIGS. 2 a and 2 b illustrate the problem of transitioning from a lower temporal level to a higher temporal level using structures 200 a and 200 b, which for simplicity have only two layers each. It will be understood that the problem may exist with any number of temporal layers. FIG. 2 c shows an exemplary structure 200 c with three temporal layers, Layers 0-2. Structure 200 c presents a similar transitioning problem because of the temporal extent of the layer dependencies.
  • It is noted that temporally non-nested layer structures 200 a-200 c satisfy the requirements of G.7.4.1; however, the use of the temporal scalability feature is seriously limited. In contrast, FIG. 3 shows a “temporally nested” layer structure 300, which satisfies the requirements of G.7.4.1 and also allows temporal switching from any layer to another. As shown in the figure, structure 300 is temporally nested: for any frame i of temporal level N, there is no frame of a temporal level M<N in between frame i and any of its reference pictures in decoding order. Equivalently, no reference picture is used for inter prediction when a succeeding reference picture in decoding order has a lower temporal level value. This condition ensures that temporal layers above layer N can be added immediately after any frame of layer N.
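  • The nesting condition can be checked operationally over the pictures in decoding order. The following sketch assumes that each picture records its temporal level and the decoding-order positions of its reference pictures; the structure and function names are illustrative. It returns true only for temporally nested structures such as structure 300, and false for structures such as 200 a-200 c.
```cpp
#include <cstddef>
#include <vector>

struct Picture {
    int temporal_level;             // temporal level of this picture
    std::vector<std::size_t> refs;  // decoding-order indices of its reference pictures
};

// A structure is temporally nested if, for every picture i and every one of its
// reference pictures r, no picture with a temporal level lower than that of
// picture i lies between r and i in decoding order.
bool isTemporallyNested(const std::vector<Picture>& pics) {
    for (std::size_t i = 0; i < pics.size(); ++i) {
        for (std::size_t r : pics[i].refs) {
            for (std::size_t k = r + 1; k < i; ++k) {
                if (pics[k].temporal_level < pics[i].temporal_level)
                    return false;  // a lower-level picture intervenes
            }
        }
    }
    return true;
}
```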
  • The ability to easily add or remove temporal levels at the encoder/server, an intermediate gateway, or a receiver, is of fundamental importance in real-time, low-delay communications, as frame rate is one of the parameters that are directly available for bit rate and error control. It is noted that the exemplary temporal prediction structures described in International Patent Application Nos. PCT/US06/28365, PCT/US06/028366, PCT/US06/061815, and PCT/US07/63335 are all nested. While the coding dependency information is explicitly encoded in the SSEI (and SSEI-DC), it does not capture the temporal extent of the dependency. For example, structures 200 c and 300 have identical SSEI messages.
  • The systems and methods of the present invention include explicit information in the coded bitstream that (a) indicates the temporal extent of the dependency of temporal levels, and (b) provides the ability to enforce nested operation for specific application domains and profiles.
  • In one embodiment of the invention, the information consists of a single-bit flag, called “temporal_level_nesting_flag,” which is placed in SVC's Sequence Parameter Set.
  • FIG. 4 shows modified syntax 400 for the relevant section of the JD7 text (Section G.7.3.2, Sequence parameter set SVC syntax) in accordance with the principles of the present invention. The added flag (temporal_level_nesting_flag) is the first element in the syntax structure. The semantics of the temporal_level_nesting_flag (to be placed in G.7.4.2, Sequence parameter set SVC extension semantics in the JD7 text) are defined so that a value of 1 indicates that a reference picture shall not be used for inter prediction if a succeeding reference picture in decoding order has a lower temporal level value, whereas a value of 0 indicates that no such restriction is placed. This definition is consistent with the switching operation described below, in which up-switching at an arbitrary picture is permitted only when the flag equals 1. Alternative definitions of the semantics are also possible, without changing the limitation that the flag places on the structure of the bitstream.
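  • As a rough sketch of how a decoder or gateway might pick up the added flag, the fragment below assumes a simple MSB-first bit reader over the raw bytes of the SVC sequence-parameter-set extension; the class, field, and function names are illustrative assumptions, and all remaining extension fields are omitted.
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal MSB-first bit reader over a raw byte payload (illustrative only).
class BitReader {
public:
    explicit BitReader(const std::vector<std::uint8_t>& data) : data_(data) {}
    int readBit() {
        int bit = (data_[pos_ >> 3] >> (7 - (pos_ & 7))) & 1;
        ++pos_;
        return bit;
    }
private:
    const std::vector<std::uint8_t>& data_;
    std::size_t pos_ = 0;
};

struct SpsSvcExtension {
    bool temporal_level_nesting_flag = false;
    // ... remaining SVC extension fields omitted ...
};

// Per the modified syntax of FIG. 4, the nesting flag is read first, as a
// single u(1) element of the SVC sequence parameter set extension.
SpsSvcExtension parseSpsSvcExtension(BitReader& reader) {
    SpsSvcExtension ext;
    ext.temporal_level_nesting_flag = (reader.readBit() != 0);
    // parsing of the remaining extension fields would continue here
    return ext;
}
```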
  • In a second embodiment of the invention, the same temporal_level_nesting_flag is placed in the SSEI (SVC JD7, Section G.10.1.1), which has the additional benefit that all scalability information pertaining to a particular SVC bitstream is present in a single syntax structure. FIG. 5 shows modified syntax 500 for this case. The semantics for modified syntax 500 are identical to the semantics applicable to syntax 400.
  • The use of the temporal level nesting flag by a media server or encoder, media gateway, or receiver/decoder involves the same operations irrespective of whether the temporal_level_nesting_flag is present in the SSEI or the Sequence Parameter Set. Since the operation is the same in both cases for all devices, for convenience, all three different types of devices are referred to herein commonly as “Processing Units”.
  • FIG. 6 shows the architecture of an exemplary Processing Unit 600, as it relates to NAL filtering. Processing Unit 600 accepts SVC NAL units at each input, and produces copies of some or all of the input NAL units at its output. The decision on which NAL units to forward to the output is performed at the NAL Filtering Unit 610. In a preferred architecture, NAL Filtering Unit 610 is controlled by a NAL Filter Configuration (NFC) table 620, which may be stored in RAM. NFC 620 is a three-dimensional table, where the three dimensions T, D, and Q correspond to the temporal_level, dependency_id, and quality_id of a NAL unit. In FIG. 6, the table value is shown in the PASS column. A value of 1 in a table entry with particular T, D, and Q values indicates that NAL Filtering Unit 610 should forward an input NAL unit that has the same T, D, and Q values in its SVC header. Conversely, a value of 0 indicates that it should not forward the particular input NAL unit. Thus, according to NFC 620 shown in FIG. 6, the base layer (T=0, D=0, Q=0) is allowed to be forwarded to the output, but the higher temporal layer (T=1) is not.
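  • A minimal sketch of such an NFC table and the resulting forwarding decision is given below. The table sizes, member names, and helper function are assumptions; a real NAL Filtering Unit would size the table from the SSEI and read the T, D, and Q values from each NAL unit's SVC header.
```cpp
#include <cstddef>
#include <vector>

// NAL Filter Configuration (NFC): a T x D x Q table of pass flags.
struct NfcTable {
    int maxT, maxD, maxQ;
    std::vector<int> pass;  // 1 = forward the NAL unit, 0 = drop it

    NfcTable(int t, int d, int q)
        : maxT(t), maxD(d), maxQ(q),
          pass(static_cast<std::size_t>(t) * d * q, 1) {}

    int& at(int t, int d, int q) {
        return pass[(static_cast<std::size_t>(t) * maxD + d) * maxQ + q];
    }
};

// Forwarding decision of the NAL Filtering Unit for one input NAL unit,
// identified by the temporal_level (T), dependency_id (D), and quality_id (Q)
// values carried in its SVC header.
bool shouldForward(NfcTable& nfc, int t, int d, int q) {
    return nfc.at(t, d, q) == 1;
}
```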
  • During set up, Processing Unit 600 obtains the SSEI, either in-band (from the SVC bitstream), through signaling, or other means. The SSEI is stored in RAM 640 to be used for later operations. NFC 620 may obtain its initial configuration after the SSEI is obtained. The initial configuration may be, for example, such that all NAL units are passed on to the output (no filtering is applied). This is dependent on the specific application. Processing Unit 600 also sets an initial value to the TL memory 630, which stores the current operating temporal level.
  • As shown in FIG. 6, Processing Unit 600 is also equipped with an additional input, Temporal Level Switch Trigger 650. This input provides information to NAL Filtering Unit 610 on the desired temporal level of system operation. Temporal Level Switch Trigger 650 signal may, for example, have positive, zero, or negative integer values, indicating that after the current picture the temporal level should be increased, stay the same, or be reduced, respectively, by the indicated amount.
  • When NAL Filtering Unit 610 detects a negative value of Temporal Level Switch Trigger signal at a particular picture, it adds this value to the current operating temporal level value stored in TL memory 630 and reconfigures the NFC table 620 to reflect the desired new operating temporal level. If the addition results in a negative value, a value of 0 is stored in TL memory 630. When NAL Filtering Unit 610 detects a positive Temporal Level Switch Trigger signal at a particular picture, it first checks the value of the temporal_level_nesting_flag. If the value is 0, then NAL Filtering Unit 610 cannot decide, in the absence of additional application-specific information, if it is possible to switch to the desired higher temporal level and no action is taken. If the value of temporal_level_nesting_flag is 1, then the value of the Temporal Level Switch Trigger signal is added to the TL memory, and the NFC table is reconfigured to reflect the desired new operating level. If the new value of the TL memory is higher than the maximum temporal level present in the bitstream, as reflected in the SSEI, then the TL is set to that maximum temporal level value. It is noted that the maximum temporal level value can be obtained from the SSEI by parsing all the layer information contained in the SSEI, and storing the largest value of the temporal_level[i] syntax element.
  • FIG. 7 shows a flow diagram 700 of the operation of NAL Filtering Unit 610. In flow diagram 700, the legend ‘TRIGGER’ designates the value of Temporal Level Switch Trigger 650 signal of FIG. 6, while ‘TL_MAX’ designates the maximum temporal level value as obtained from the SSEI. Function NFC(T, D, Q) returns the value of NFC 620 for the particular combination of T, D, and Q values.
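  • Combining the elements above, the following sketch mirrors the TRIGGER handling just described, reusing the NfcTable sketched earlier. Variable names are illustrative, and the final loop assumes that reconfiguring the NFC simply means passing every NAL unit whose temporal level does not exceed the new operating level; any dependency- or quality-based filtering a real unit maintains is ignored here.
```cpp
// 'tl' is the current operating temporal level (the TL memory), 'trigger' the
// Temporal Level Switch Trigger value, 'tlMax' the maximum temporal level found
// in the SSEI (TL_MAX), and 'nestingFlag' the temporal_level_nesting_flag.
void handleTemporalLevelSwitch(NfcTable& nfc, int& tl, int trigger,
                               int tlMax, bool nestingFlag) {
    if (trigger == 0)
        return;                      // keep the current operating level
    if (trigger > 0 && !nestingFlag)
        return;                      // cannot safely up-switch: take no action
    tl += trigger;
    if (tl < 0) tl = 0;              // clamp at the base temporal level
    if (tl > tlMax) tl = tlMax;      // clamp at the maximum level in the SSEI
    // Reconfigure the NFC to reflect the new operating temporal level.
    for (int t = 0; t < nfc.maxT; ++t)
        for (int d = 0; d < nfc.maxD; ++d)
            for (int q = 0; q < nfc.maxQ; ++q)
                nfc.at(t, d, q) = (t <= tl) ? 1 : 0;
}
```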
  • It is noted that in systems where all components are purposefully designed together, it may be possible to make a priori assumptions about the structure of the bitstream. In such cases, temporal level upswitching may be possible if certain criteria are satisfied by the T, D, and Q values. NAL Filtering Unit 610 may be configured to incorporate such criteria when attempting to perform temporal level upswitching, and may also elect to perform the upswitching at a later picture, where presumably the application-specific conditions will be satisfied.
  • While there have been described what are believed to be the preferred embodiments of the present invention, those skilled in the art will recognize that other and further changes and modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the true scope of the invention.
  • It will be understood that, in accordance with the present invention, the techniques described herein may be implemented using any suitable combination of hardware and software. The software (i.e., instructions) for implementing and operating the aforementioned techniques can be provided on computer-readable media, which can include, without limitation, firmware, memory, storage devices, microcontrollers, microprocessors, integrated circuits, ASICs, downloadable media, and other available media.

Claims (6)

1. A method for decoding media encoded using scalable coding with temporal scalability at a pre-determined maximum temporal level, the method comprising:
receiving signaling information that is indicative of a temporally nested structure of layers and pertains to all frames of a coded video sequence; and
decoding with a decoding device those parts of the encoded media signal which correspond to temporal levels that are less than or equal to the maximum temporal level,
wherein the signaling information indicates for any three frames picA, picB, picC included in the coded video sequence, that picB is not used for reference of picA:
under a first condition that picB is of a temporal level lower than or equal to the temporal level of picA, and
under a second condition that the temporal level of picC is lower than the temporal level of picB, and
under a third condition that picC follows picB in decoding order, and
under a fourth condition that picC precedes picA in decoding order.
2. A method for decoding media encoded using scalable coding with temporal scalability at a pre-determined maximum temporal level, the method comprising:
receiving signaling information that is indicative of a temporally nested structure of layers and pertains to a plurality of frames, and
decoding with a decoding device those parts of the encoded media signal which correspond to temporal levels that are less than or equal to the maximum temporal level,
wherein the scalable coding complies with H.264/SVC, and
wherein the signaling information indicates for any three frames picA, picB, picC included in the coded video sequence, that picB is not used for reference of picA:
under a first condition that picB is of a temporal level lower than or equal to the temporal level of picA, and
under a second condition that the temporal level of picC is lower than the temporal level of picB, and
under a third condition that picC follows picB in decoding order, and
under a fourth condition that picC precedes picA in decoding order.
3. A non-transitory computer readable medium comprising a set of executable instructions to direct a processor to perform the method recited in claim 1.
4. A non-transitory computer readable medium comprising a set of executable instructions to direct a processor to perform the method recited in claim 2.
5. A decoding device configured to perform the method in claim 1.
6. A decoding device configured to perform the method in claim 2.
US14/072,638 2006-07-21 2013-11-05 Systems and methods for signaling and performing temporal level switching in scalable video coding Abandoned US20140105309A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/072,638 US20140105309A1 (en) 2006-07-21 2013-11-05 Systems and methods for signaling and performing temporal level switching in scalable video coding
US14/156,243 US8861613B2 (en) 2006-07-21 2014-01-15 Systems and methods for signaling and performing temporal level switching in scalable video coding

Applications Claiming Priority (13)

Application Number Priority Date Filing Date Title
PCT/US2006/028365 WO2008060262A1 (en) 2005-09-07 2006-07-21 System and method for scalable and low-delay videoconferencing using scalable video coding
PCT/US2006/028366 WO2008082375A2 (en) 2005-09-07 2006-07-21 System and method for a conference server architecture for low delay and distributed conferencing applications
PCT/US2006/028367 WO2007075196A1 (en) 2005-09-07 2006-07-21 System and method for a high reliability base layer trunk
US82960906P 2006-10-16 2006-10-16
PCT/US2006/061815 WO2007067990A2 (en) 2005-12-08 2006-12-08 Systems and methods for error resilience and random access in video communication systems
PCT/US2006/062569 WO2007076486A2 (en) 2005-12-22 2006-12-22 System and method for videoconferencing using scalable video coding and compositing scalable video conferencing servers
PCT/US2007/062357 WO2007095640A2 (en) 2006-02-16 2007-02-16 System and method for thinning of scalable video coding bit-streams
PCT/US2007/063335 WO2007103889A2 (en) 2006-03-03 2007-03-05 System and method for providing error resilience, random access and rate control in scalable video communications
PCT/US2007/065003 WO2007112384A2 (en) 2006-03-27 2007-03-27 System and method for management of scalability information in scalable video and audio coding systems using control messages
PCT/US2007/065554 WO2007115133A2 (en) 2006-03-29 2007-03-29 System and method for transcoding between scalable and non-scalable video codecs
PCT/US2007/080089 WO2008042852A2 (en) 2006-09-29 2007-10-01 System and method for multipoint conferencing with scalable video coding servers and multicast
US11/871,612 US8594202B2 (en) 2006-07-21 2007-10-12 Systems and methods for signaling and performing temporal level switching in scalable video coding
US14/072,638 US20140105309A1 (en) 2006-07-21 2013-11-05 Systems and methods for signaling and performing temporal level switching in scalable video coding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/871,612 Continuation US8594202B2 (en) 2006-07-21 2007-10-12 Systems and methods for signaling and performing temporal level switching in scalable video coding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/156,243 Continuation US8861613B2 (en) 2006-07-21 2014-01-15 Systems and methods for signaling and performing temporal level switching in scalable video coding

Publications (1)

Publication Number Publication Date
US20140105309A1 true US20140105309A1 (en) 2014-04-17

Family

ID=39314754

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/871,612 Active 2030-01-28 US8594202B2 (en) 2006-07-21 2007-10-12 Systems and methods for signaling and performing temporal level switching in scalable video coding
US14/072,638 Abandoned US20140105309A1 (en) 2006-07-21 2013-11-05 Systems and methods for signaling and performing temporal level switching in scalable video coding
US14/156,243 Active US8861613B2 (en) 2006-07-21 2014-01-15 Systems and methods for signaling and performing temporal level switching in scalable video coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/871,612 Active 2030-01-28 US8594202B2 (en) 2006-07-21 2007-10-12 Systems and methods for signaling and performing temporal level switching in scalable video coding

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/156,243 Active US8861613B2 (en) 2006-07-21 2014-01-15 Systems and methods for signaling and performing temporal level switching in scalable video coding

Country Status (7)

Country Link
US (3) US8594202B2 (en)
EP (1) EP2080275B1 (en)
JP (2) JP2010507346A (en)
CN (2) CN106982382B (en)
AU (1) AU2007311178A1 (en)
CA (2) CA2666601C (en)
WO (1) WO2008048886A2 (en)

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8289370B2 (en) 2005-07-20 2012-10-16 Vidyo, Inc. System and method for scalable and low-delay videoconferencing using scalable video coding
AU2007230602B2 (en) * 2006-03-27 2012-01-12 Vidyo, Inc. System and method for management of scalability information in scalable video and audio coding systems using control messages
JP2010507346A (en) * 2006-10-16 2010-03-04 ヴィドヨ,インコーポレーテッド System and method for implementing signaling and time level switching in scalable video coding
CN101690229A (en) * 2007-06-26 2010-03-31 诺基亚公司 System and method for indicating temporal layer switching points
EP2025674A1 (en) 2007-08-15 2009-02-18 sanofi-aventis Substituted tetra hydro naphthalines, method for their manufacture and their use as drugs
US9445110B2 (en) 2007-09-28 2016-09-13 Dolby Laboratories Licensing Corporation Video compression and transmission techniques
US8243117B2 (en) * 2008-09-26 2012-08-14 Microsoft Corporation Processing aspects of a video scene
US8804821B2 (en) 2008-09-26 2014-08-12 Microsoft Corporation Adaptive video processing of an interactive environment
KR20100071688A (en) * 2008-12-19 2010-06-29 한국전자통신연구원 A streaming service system and method for universal video access based on scalable video coding
KR101188563B1 (en) 2009-05-21 2012-10-05 에스케이플래닛 주식회사 Asymmetric Scalable Downloading Method and System
US8933024B2 (en) 2010-06-18 2015-01-13 Sanofi Azolopyridin-3-one derivatives as inhibitors of lipases and phospholipases
US8530413B2 (en) 2010-06-21 2013-09-10 Sanofi Heterocyclically substituted methoxyphenyl derivatives with an oxo group, processes for preparation thereof and use thereof as medicaments
TW201215388A (en) 2010-07-05 2012-04-16 Sanofi Sa (2-aryloxyacetylamino)phenylpropionic acid derivatives, processes for preparation thereof and use thereof as medicaments
TW201221505A (en) 2010-07-05 2012-06-01 Sanofi Sa Aryloxyalkylene-substituted hydroxyphenylhexynoic acids, process for preparation thereof and use thereof as a medicament
TW201215387A (en) 2010-07-05 2012-04-16 Sanofi Aventis Spirocyclically substituted 1,3-propane dioxide derivatives, processes for preparation thereof and use thereof as a medicament
WO2012096981A1 (en) 2011-01-14 2012-07-19 Vidyo, Inc. Improved nal unit header
US9113172B2 (en) 2011-01-14 2015-08-18 Vidyo, Inc. Techniques for describing temporal coding structure
US10034009B2 (en) 2011-01-14 2018-07-24 Vidyo, Inc. High layer syntax for temporal scalability
KR102125930B1 (en) 2011-02-16 2020-06-23 선 페이턴트 트러스트 Video encoding method and video decoding method
US20120230409A1 (en) * 2011-03-07 2012-09-13 Qualcomm Incorporated Decoded picture buffer management
AU2012225513B2 (en) 2011-03-10 2016-06-23 Vidyo, Inc. Dependency parameter set for scalable video coding
WO2013037390A1 (en) 2011-09-12 2013-03-21 Sanofi 6-(4-hydroxy-phenyl)-3-styryl-1h-pyrazolo[3,4-b]pyridine-4-carboxylic acid amide derivatives as kinase inhibitors
EP2760862B1 (en) 2011-09-27 2015-10-21 Sanofi 6-(4-hydroxy-phenyl)-3-alkyl-1h-pyrazolo[3,4-b]pyridine-4-carboxylic acid amide derivatives as kinase inhibitors
US20130195201A1 (en) * 2012-01-10 2013-08-01 Vidyo, Inc. Techniques for layered video encoding and decoding
US8908005B1 (en) 2012-01-27 2014-12-09 Google Inc. Multiway video broadcast system
US9001178B1 (en) 2012-01-27 2015-04-07 Google Inc. Multimedia conference broadcast system
KR101652928B1 (en) 2012-01-31 2016-09-01 브이아이디 스케일, 인크. Reference picture set(rps) signaling for scalable high efficiency video coding(hevc)
US10205961B2 (en) * 2012-04-23 2019-02-12 Qualcomm Incorporated View dependency in multi-view coding and 3D coding
US9313486B2 (en) 2012-06-20 2016-04-12 Vidyo, Inc. Hybrid video coding techniques
EP4436173A2 (en) 2012-06-25 2024-09-25 Huawei Technologies Co., Ltd. Method for signaling a gradual temporal layer access picture
EP3758376A1 (en) * 2012-06-28 2020-12-30 Saturn Licensing LLC Receiving device and corresponding method
EP2863630A4 (en) * 2012-07-03 2016-03-09 Samsung Electronics Co Ltd Method and apparatus for coding video having temporal scalability, and method and apparatus for decoding video having temporal scalability
US10038899B2 (en) 2012-10-04 2018-07-31 Qualcomm Incorporated File format for video data
US9774927B2 (en) * 2012-12-21 2017-09-26 Telefonaktiebolaget L M Ericsson (Publ) Multi-layer video stream decoding
EP3057330B1 (en) * 2013-10-11 2020-04-01 Sony Corporation Transmission device, transmission method, and reception device
US10178397B2 (en) * 2014-03-24 2019-01-08 Qualcomm Incorporated Generic use of HEVC SEI messages for multi-layer codecs
US10506230B2 (en) * 2017-01-04 2019-12-10 Qualcomm Incorporated Modified adaptive loop filter temporal prediction for temporal scalability support
CR20230153A (en) 2020-05-22 2023-05-16 Ge Video Compression Llc Video encoder, video decoder, methods for encoding and decoding and video data stream for realizing advanced video coding concepts
AU2021276676B2 (en) 2020-05-22 2024-08-22 Bytedance Inc. Scalable nested SEI message handling in video sub-bitstream extraction process
JP7549045B2 (en) 2020-06-09 2024-09-10 バイトダンス インコーポレイテッド Sub-bitstream extraction of multi-layer video bitstreams
CN113259673B (en) * 2021-07-05 2021-10-15 腾讯科技(深圳)有限公司 Scalable video coding method, apparatus, device and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6480541B1 (en) * 1996-11-27 2002-11-12 Realnetworks, Inc. Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts
US7085401B2 (en) 2001-10-31 2006-08-01 Infowrap Systems Ltd. Automatic object extraction
US20030123546A1 (en) 2001-12-28 2003-07-03 Emblaze Systems Scalable multi-level video coding
CN1288915C (en) * 2002-01-23 2006-12-06 诺基亚有限公司 Grouping of image frames in video coding
US6898313B2 (en) * 2002-03-06 2005-05-24 Sharp Laboratories Of America, Inc. Scalable layered coding in a multi-layer, compound-image data transmission system
US6646578B1 (en) * 2002-11-22 2003-11-11 Ub Video Inc. Context adaptive variable length decoding system and method
US20050254575A1 (en) * 2004-05-12 2005-11-17 Nokia Corporation Multiple interoperability points for scalable media coding and transmission
JP3936707B2 (en) * 2004-05-26 2007-06-27 日本電信電話株式会社 Scalable communication conference system, server device, scalable communication conference method, scalable communication conference control method, scalable communication conference control program, and program recording medium thereof
US7522724B2 (en) 2005-01-07 2009-04-21 Hewlett-Packard Development Company, L.P. System and method of transmission of generalized scalable bit-streams
US20060153295A1 (en) * 2005-01-12 2006-07-13 Nokia Corporation Method and system for inter-layer prediction mode coding in scalable video coding
US7110605B2 (en) * 2005-02-04 2006-09-19 Dts Az Research, Llc Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
CN101317460A (en) * 2005-10-11 2008-12-03 诺基亚公司 System and method for efficient scalable stream adaptation
US8699583B2 (en) * 2006-07-11 2014-04-15 Nokia Corporation Scalable video coding and decoding
US7991236B2 (en) * 2006-10-16 2011-08-02 Nokia Corporation Discardable lower layer adaptations in scalable video coding
JP2010507346A (en) 2006-10-16 2010-03-04 ヴィドヨ,インコーポレーテッド System and method for implementing signaling and time level switching in scalable video coding
US20080095228A1 (en) * 2006-10-20 2008-04-24 Nokia Corporation System and method for providing picture output indications in video coding

Also Published As

Publication number Publication date
EP2080275A2 (en) 2009-07-22
US8594202B2 (en) 2013-11-26
CA2849697A1 (en) 2008-04-24
CN101573883B (en) 2017-03-01
WO2008048886A3 (en) 2008-10-16
EP2080275A4 (en) 2010-08-18
JP2013128308A (en) 2013-06-27
CA2666601A1 (en) 2008-04-24
US20140133576A1 (en) 2014-05-15
AU2007311178A1 (en) 2008-04-24
CA2666601C (en) 2014-08-05
WO2008048886A9 (en) 2008-08-28
CN106982382B (en) 2020-10-16
US20090116562A1 (en) 2009-05-07
WO2008048886A2 (en) 2008-04-24
EP2080275B1 (en) 2019-03-20
CN106982382A (en) 2017-07-25
CN101573883A (en) 2009-11-04
JP5640104B2 (en) 2014-12-10
US8861613B2 (en) 2014-10-14
JP2010507346A (en) 2010-03-04

Similar Documents

Publication Publication Date Title
US8861613B2 (en) Systems and methods for signaling and performing temporal level switching in scalable video coding
US9270939B2 (en) System and method for providing error resilience, random access and rate control in scalable video communications
EP2005607B1 (en) System and method for management of scalability information in scalable video coding systems using control messages
KR100984693B1 (en) Picture delimiter in scalable video coding
JP5247901B2 (en) Multiple interoperability points for encoding and transmission of extensible media
JP5268915B2 (en) Visual composition management technology for multimedia audio conferencing
JP6309463B2 (en) System and method for providing error resilience, random access, and rate control in scalable video communication
CA2647823A1 (en) System and method for management of scalability information in scalable video and audio coding systems using control messages
JP2009540625A (en) System and method for thinning a scalable video coding bitstream
JP2009540625A6 (en) System and method for thinning a scalable video coding bitstream
JP2023518753A (en) Constraints on reference picture order
AU2012201235B2 (en) Systems and methods for signaling and performing temporal level switching in scalable video coding
CA2763089C (en) System and method for management of scalability information in scalable video and audio coding systems using control messages
Inamdar Performance Evaluation Of Greedy Heuristic For SIP Analyzer In H.264/SVC

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION