AU2010241332A1 - System and method for jitter buffer reduction in scalable coding - Google Patents

System and method for jitter buffer reduction in scalable coding

Info

Publication number
AU2010241332A1
Authority
AU
Australia
Prior art keywords
jitter
buffers
buffer
layer
decoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2010241332A
Inventor
Reha Civanlar
Alexandros Eleftheriadis
Ofer Shapiro
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidyo Inc
Original Assignee
Vidyo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vidyo Inc filed Critical Vidyo Inc
Priority to AU2010241332A priority Critical patent/AU2010241332A1/en
Publication of AU2010241332A1 publication Critical patent/AU2010241332A1/en
Priority to AU2013200416A priority patent/AU2013200416A1/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23406Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving management of server-side video buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/66Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • H04N21/42692Internal components of the client ; Characteristics thereof for reading from or writing on a volatile storage medium, e.g. Random Access Memory [RAM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64746Control signals issued by the network directed to the server or the client
    • H04N21/64753Control signals issued by the network directed to the server or the client directed to the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Description

Australian Patents Act 1990 - Regulation 3.2 ORIGINAL COMPLETE SPECIFICATION STANDARD PATENT Invention Title: "System and method for jitter buffer reduction in scalable coding" The following statement is a full description of this invention, including the best method of performing it known to me/us:

SYSTEM AND METHOD FOR JITTER BUFFER REDUCTION IN SCALABLE CODING

Cross-Reference to Related Applications

This application claims the benefit of United States provisional patent application Serial No. 60/701,110, filed July 20, 2005. Further, this application is related to co-filed United States patent application Serial Nos. [SVCSystem], [SVC], and [base trunk]. All of the aforementioned priority and related applications are hereby incorporated by reference herein in their entireties. The disclosure of the complete specification of Australian Patent Application No. 2006346224, as originally filed, is incorporated herein by reference.

Field of the Invention

The present invention relates to multimedia and telecommunications technology. In particular, it relates to audio and video data communication systems, and specifically to the use of jitter buffers in video encoding/decoding systems.

Background of the Invention

Data packets/signals (e.g., audio and video signals) transmitted across conventional electronic communication networks (e.g., Internet Protocol ("IP") networks) are subject to undesirable phenomena that degrade signal integrity or quality. These phenomena include, for example, variable delay (i.e., each data packet may suffer a different delay, also known as "jitter"), out-of-order reception of sequential packets, and packet loss.

In conventional streaming video systems, a network device typically receives multimedia or video packets from a network and stores them in a buffer. The buffer allows enough time for out-of-order or delayed packets to arrive, and then releases the multimedia/video data at a uniform rate for playback. If a specific data frame is carried in more than one packet, the buffer must allocate sufficient time for all parts of that frame to arrive. Jitter buffer lengths/delays can account for a major part of the overall end-to-end delay in an IP communication system. Traditionally, a jitter buffer's length (i.e., delay) is adjusted to allow almost all fragments of a frame sufficient time to arrive before the next frame has to be decoded for display.

Scalable coding techniques allow a data signal (e.g., an audio and/or video data signal) to be coded and compressed for transmission in a multiple-layer format. The information content of the signal is distributed among the coded layers, and each layer, or combination of layers, may be transmitted in its own bitstream. A "base layer" bitstream, by design, may carry sufficient information for reconstruction of the original audio and/or video signal at a desired minimum or basic quality level upon decoding. "Enhancement layer" bitstreams may carry additional information that can be used to improve upon the basic-quality reconstruction. Scalable audio coding (SAC) and scalable video coding (SVC) may be used in audio and/or videoconferencing systems implemented over electronic communications networks.
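To illustrate the conventional single-buffer behavior described in the Background above, the following is a minimal sketch of a fixed-delay jitter buffer that must hold every packet of a frame until either all fragments have arrived or the playout deadline expires. All names (ConventionalJitterBuffer, Packet, etc.) are illustrative assumptions and not taken from the specification.

```python
# Minimal sketch of a conventional single fixed-delay jitter buffer.
# All names are illustrative assumptions; this is not code from the patent.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Packet:
    frame_id: int         # which frame this fragment belongs to
    total_fragments: int  # how many packets make up the frame
    arrival_s: float      # arrival time in seconds

class ConventionalJitterBuffer:
    def __init__(self, delay_s: float):
        self.delay_s = delay_s           # single delay d shared by all packets
        self.frames = defaultdict(list)  # frame_id -> received packets

    def on_packet(self, pkt: Packet) -> None:
        self.frames[pkt.frame_id].append(pkt)

    def release(self, frame_id: int, send_s: float):
        """At the playout deadline (send time + d), release the frame only if
        every fragment arrived in time; a single late fragment loses the frame."""
        deadline = send_s + self.delay_s
        pkts = [p for p in self.frames.pop(frame_id, []) if p.arrival_s <= deadline]
        if pkts and len(pkts) == pkts[0].total_fragments:
            return pkts
        return None
```

The limitation sketched here is that the single delay d must be long enough for the slowest fragment of a frame; the layered arrangements described below avoid paying that delay for every layer.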
Co-filed United States patent application Serial Nos. [SVCSystem] and [SVC] describe systems and methods for scalable audio and video coding for exemplary audio and/or videoconferencing applications. The referenced applications describe particular IP multipoint control units (MCUs), Scalable Audio Conferencing Servers (SACS) and Scalable Video Conferencing Servers (SVCS) that are designed for mediating the transmission of SAC and SVC layer bitstreams between conferencing endpoints.

It should be noted that other methods of creating enhancement layers include: a) a complete representation of the high-quality signal, without reference to the base layer information, a method also known as 'simulcasting'; or b) two or more representations of the same signal at similar quality but with minimal correlation, where a subset of the representations on its own would be considered the 'base layer' and the remaining representations would be considered enhancements. This latter method is also known as 'multiple description coding'. For brevity, all of these methods are referred to herein as base and enhancement layer coding.

Consideration is now being given to improving the design of jitter buffers used in video communication systems. In particular, attention is being directed to designing efficient jitter buffers in communication systems that transmit scalable coded video streams.

SUMMARY OF THE INVENTION

Systems and methods are provided for reducing jitter buffer lengths or delays in video communication systems that transmit scalable coded video streams. The systems and methods of the present invention generally involve deploying a plurality of jitter buffers at receivers/endpoints to separately buffer two or more layers of a received SVC stream. Further, the plurality of jitter buffers may be configured with different delay settings to accommodate, for example, different loss rates of the individual layer streams.

In an exemplary embodiment of the present invention, a system for receiving SVC data (e.g., a receiving terminal or endpoint) includes a number of jitter buffers, each of which is designated to buffer a respective one of the layers of a received SVC data stream. The jitter buffers are configured with different lengths/delays in a manner which reduces the delay for the overall system. The receiving terminal/endpoint also includes a decoder that can decode the buffered video data stream layer by layer. The decoder is configured to selectively drop enhancement layer information in a manner which has minimal impact on displayed video quality but which improves system delay performance.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A and 1B are block diagrams illustrating exemplary scalably coded video data receivers, which include jitter buffer arrangements designed in accordance with the principles of the present invention. FIGS. 2 and 3 are error rate graphs, which illustrate the advantages of the jitter buffer arrangements of the present invention.

Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments.
DETAILED DESCRIPTION OF THE INVENTION

Jitter buffer arrangements designed to reduce delay in video communication systems are provided. The jitter buffer arrangements may be implemented at video-receiving terminals or communications system endpoints that receive video data streams encoded in a multi-layer format, such as scalable coding with a base layer and enhancement layers. Other methods of creating enhancement layers include simulcasting and multiple description coding, among others; for brevity, all of these methods are referred to herein as base and enhancement layer coding.

The jitter buffer arrangements include a plurality of individual jitter buffers, each of which is designated to buffer data packets for a particular layer (or a particular combination of layers) of an incoming video data stream. The jitter buffer arrangements further include, or are associated with, a decoder that is designed to decode the buffered data packets individual jitter buffer by individual jitter buffer.

FIGS. 1A and 1B show exemplary jitter buffer/decoder arrangements 100A and 100B that may be incorporated in receiving terminals or endpoints (e.g., endpoints 110 and 120, respectively). Both arrangements 100A and 100B are designed to receive, decode, and display video data streams 150 that are scalably coded in a multi-layer format (e.g., as base layer 150A and enhancement layers 150B-D). Both arrangements include a plurality of jitter buffers 130 for buffering video packets in the incoming video data streams 150 layer-by-layer. Jitter buffers 130A and 130B as shown, for example, include a base jitter buffer corresponding to video stream base layer 150A, and jitter buffers 1, 2, and 3 corresponding to video stream enhancement layers 150B-150D, respectively. Both arrangements 100A and 100B include a decoder 140. In arrangement 100A, decoder 140 precedes jitter buffers 130A, so that the incoming video stream layers 150A-D are decoded before buffering. Conversely, in arrangement 100B, decoder 140 follows jitter buffers 130B, so that video stream layers 150A-D are buffered and then decoded. The outputs of arrangements 100A and 100B may be multiplexed by a multiplexer (e.g., MUX 150) to produce a reconstructed video stream 160 for display.

Further, endpoints 110/120 may include suitable jitter buffer management algorithms, which allow different buffering or waiting times for base and enhancement layer video stream packets in their respective buffers. The distribution of the wait times (i.e., jitter buffer lengths/delays) across the different layers may be selected to minimize the overall delay in the system. For example, jitter buffer/decoder arrangements 100A and 100B may be configured to permit the tolerable error rates (i.e., the rate at which late-arriving packets are discarded or considered dropped by the jitter buffer) for the enhancement layers to be higher than the error rate allowed for the base layer. This scheme recognizes that, in practice, base layer packets tend to be smaller than enhancement layer packets and are therefore less susceptible to jitter to begin with, and that base layer packets are in most instances transmitted over better quality links or channels, which are less prone to packet loss and jitter.
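The layer-by-layer buffering just described can be sketched as follows. This is a minimal illustration under assumed names and interfaces (LayerJitterBuffer, LayeredReceiver, and the per-layer delay values); it is not the patent's implementation, and the decode step is only implied.

```python
# Minimal sketch of a per-layer jitter buffer arrangement (buffer first,
# then decode layer-by-layer, cf. the FIG. 1B style arrangement).
# All names and delay values are illustrative assumptions.
from collections import defaultdict

class LayerJitterBuffer:
    def __init__(self, delay_s: float):
        self.delay_s = delay_s            # this layer's buffer length/delay
        self.packets = defaultdict(list)  # frame_id -> [(arrival_s, payload)]

    def on_packet(self, frame_id: int, payload: bytes, arrival_s: float) -> None:
        self.packets[frame_id].append((arrival_s, payload))

    def collect(self, frame_id: int, send_s: float) -> list:
        """Return payloads that met this layer's deadline; late packets are
        simply treated as dropped for this frame."""
        deadline = send_s + self.delay_s
        return [p for (t, p) in self.packets.pop(frame_id, []) if t <= deadline]

class LayeredReceiver:
    def __init__(self, delays_by_layer: dict):
        # e.g., {"base": 0.010, "enh1": 0.020, "enh2": 0.020}: the base layer
        # gets a tight deadline with a very low drop rate, while enhancement
        # layers may use looser settings and tolerate occasional late packets.
        self.buffers = {layer: LayerJitterBuffer(d)
                        for layer, d in delays_by_layer.items()}

    def assemble_frame(self, frame_id: int, send_s: float) -> dict:
        """Gather whatever arrived per layer; the decoder can start on the
        base layer without waiting for the enhancement layers."""
        return {layer: buf.collect(frame_id, send_s)
                for layer, buf in self.buffers.items()}
```

Under such an arrangement, a frame is displayable as soon as its base layer packets meet the base layer deadline; enhancement packets that miss their independently chosen deadlines merely reduce the displayed quality for that frame.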
The values of the jitter buffer lengths/delays and their distribution may be adjusted dynamically in response to network conditions (e.g., loss rates or traffic load) or other factors.

The jitter buffer arrangements of the present invention can significantly reduce the overall communication system delay before data contained in a received frame can be displayed or played back. Such reduced delays are desirable quality features in all audio and video communication systems, and particularly in systems operating in real time, such as videoconferencing or audio communications applications.

The jitter buffer arrangements of the present invention also advantageously allow the base and enhancement layers, which are buffered separately, to be decoded separately. Receiving endpoints 110/120 may begin decoding any of the base and enhancement layers without waiting for the other layers to arrive. This feature can reduce or minimize the amount of idle time for the decoding CPU or DSP (e.g., decoder 140), thereby increasing its overall utilization. This feature also facilitates the use of multiple CPUs or CPU cores.

In accordance with an exemplary embodiment of the present invention, different jitter buffers may be associated with each of the different quality layers in the video stream. Different values may be assigned to the different jitter buffer delays or lengths in response to network conditions, so that the likelihood of timely receipt of the base layer packets related to video frames is very high, even as occasional losses of related enhancement layer packets are permitted or tolerated.

With renewed reference to FIGS. 1A and 1B, arrangement 100A includes a decoder 140, which decodes the incoming video stream layers 150A-150D in parallel, and multiple jitter buffers 130A for buffering the respective decoded layer streams. In arrangement 100B, decoder 140 performs decoding of the layers as processes that are dependent on each other (i.e., one layer is required to decode another layer). In either arrangement, the operational parameters for a jitter buffer associated with a particular layer of video data may be different from the operational parameters used for the jitter buffers associated with other layers of video data. The operational parameters (e.g., delay or length settings) for the jitter buffers may be suitably selected or adjusted in response to network conditions or to address other concerns of the particular implementation.

An exemplary procedure for the selection and assignment of jitter buffer lengths/delays is described herein with reference to an exemplary video system B, which employs scalable video coding, and a contrasting video system A, which does not. In either system A or B, a number of transmitted data packets (e.g., three packets) may carry all the information related to a given video frame. In system A, all of the transmitted packets are required to display the frame. Assuming that the packets related to the frame have equal but uncorrelated arrival probabilities, the probability P of obtaining a correct display at a receiver is given by P = (1 − p)^n, where p is the probability that a single packet related to the frame will arrive later than a certain jitter buffer delay d, beyond which any late-arriving packets are presumed lost, and n is the number of packets needed for reconstructing the frame.
In system A, the number n is the total number of transmitted packets related to the frame. In contrast, in system B the number n is 1 (i.e., the base layer). Accordingly, the probability P that the frame will be displayed correctly in system B is (1 − p), which is greater than (1 − p)^n, the probability that the frame will be displayed correctly in system A.

In a design procedure for the selection of suitable jitter buffer lengths/delays for system B, which employs scalable video coding, the probability p may be computed using the error function as a function of the jitter buffer delay d, under the assumption that the jitter statistics are Gaussian.

FIG. 2 shows exemplary computed error or frame drop rates (1 − P) for a one- to three-packet video frame as a function of jitter buffer length/delay d, normalized by a suitable measure of jitter. The suitable measure of jitter is defined as one standard deviation of packet arrival delays in the network. As seen from FIG. 2, similar frame drop rates can be obtained for both systems A and B by setting the jitter buffer delay d for system B to about 1/3 of a standard deviation, while the jitter buffer delay d for system A, defined above, is set at about 1 standard deviation. The similar frame drop rates are obtained in the two systems because system A must wait for receipt of all three packets for proper frame reconstruction and display, while system B, which tolerates loss of enhancement packets, has to wait only for receipt of the base layer. Thus, if system A shows a jitter of 30 ms, the buffering delay can be reduced to approximately 10 ms in system B.

The reconstruction and display of video frames in system B without receipt of the enhancement layers is associated with a 'resolution drop rate' (i.e., the rate at which base layer packets arrive on time but enhancement packets arrive late). With reference to FIG. 2, assuming that an acceptable base layer drop rate is set at 1%, the resolution drop rate is also at most a few percentage points.

In another exemplary implementation of the present invention, different lengths/delays may be assigned, in response to network conditions, to the different jitter buffers associated with the base layer and the enhancement layers, respectively. For simplicity of description, the base layer frame is assumed to be included in one packet, and all enhancement layer frames are assumed to be included together as a frame in a second packet, so that there is only one base layer jitter buffer and one enhancement layer jitter buffer. In this example, the base layer jitter buffer length may be configured to drop no data, or at most a negligible amount of data, from the base layer (i.e., to achieve a near-zero frame drop rate), which results in acceptable system performance on resolution drop rates. The length/delay for the enhancement layer jitter buffer may be set at twice that for the base layer jitter buffer. Further in this example, the frame drop rates are the same as the packet drop rates, as one frame of the base or enhancement layer is included in one packet. FIG. 3 is a graph showing computed frame drop rates as a function of d (normalized to base jitter) for different base and enhancement layer combination scenarios.
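The comparison underlying FIG. 2 can be reproduced numerically from the formula above. The short sketch below is an illustrative assumption of how such a computation might be done: it models the late-arrival probability p as the upper tail of a Gaussian delay distribution, p = 0.5·erfc(d/(σ√2)), and evaluates the frame drop rate 1 − (1 − p)^n for n = 3 (system A, all packets needed) and n = 1 (system B, base layer only). The exact curves in FIG. 2 depend on the jitter model used there.

```python
# Illustrative sketch (not from the specification): frame drop rate versus
# normalized jitter buffer delay under a Gaussian jitter assumption.
import math

def late_prob(d_over_sigma: float) -> float:
    """Probability p that a packet arrives later than delay d, where d is
    expressed in units of the jitter standard deviation (Gaussian tail)."""
    return 0.5 * math.erfc(d_over_sigma / math.sqrt(2))

def frame_drop_rate(d_over_sigma: float, n_packets: int) -> float:
    """1 - P = 1 - (1 - p)^n: probability that at least one of the n packets
    needed to reconstruct the frame misses the jitter buffer deadline."""
    p = late_prob(d_over_sigma)
    return 1.0 - (1.0 - p) ** n_packets

if __name__ == "__main__":
    for d in (0.25, 0.5, 1.0, 2.0, 3.0):
        print(f"d = {d:4.2f} sigma | system A (n=3): {frame_drop_rate(d, 3):.4f}"
              f" | system B (n=1): {frame_drop_rate(d, 1):.4f}")
```

For any given target drop rate, system B reaches it at a smaller normalized delay d than system A, which is the delay saving the text describes.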
As seen from FIG. 3, a normalized jitter buffer length/delay ratio of about 2.7 corresponds to a 1 × 10⁻⁴ base layer drop rate (e.g., 1 frame dropped every 300 seconds in a 1-3 packet frame configuration). To obtain the same low error rate in non-layered systems, or in systems in which the jitter buffer lengths are the same for both base and enhancement layers, the total jitter buffer length/delay would have to be at least doubled to accommodate the enhancement layer jitter, which in this example is twice the base layer jitter. The exemplary implementation of the present invention avoids introducing this additional delay in the video display.

While there have been described what are believed to be the preferred embodiments of the present invention, those skilled in the art will recognize that other and further changes and modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the true scope of the invention. For example, the inventive jitter buffer arrangements have been described herein with reference to video data streams encoded in a multi-layer format. However, it is readily understood that the inventive jitter buffer arrangements can also be implemented for audio data streams encoded in a multi-layer format.

It will also be understood that, in accordance with the present invention, the jitter buffer and decoder arrangements can be implemented using any suitable combination of hardware and software. The software (i.e., instructions) for implementing and operating the aforementioned jitter buffer and decoder arrangements can be provided on computer-readable media, which can include, without limitation, firmware, microcontrollers, microprocessors, integrated circuits, ASICs, on-line downloadable media, and other available media.
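The example above (a near-zero base layer drop rate, with the enhancement layer buffer set to about twice the base layer buffer) suggests a simple delay-assignment rule: pick each layer's buffer delay from that layer's jitter statistics and a per-layer target drop rate. The sketch below is an illustrative assumption of such a rule using the inverse Gaussian CDF; it is not the procedure behind FIG. 3, whose exact numbers depend on the jitter model assumed there, and the sigma and target values shown are placeholders.

```python
# Illustrative sketch (assumed, not from the specification): assigning
# per-layer jitter buffer delays from per-layer jitter statistics and
# per-layer target drop rates, assuming Gaussian arrival-delay jitter.
from statistics import NormalDist

def buffer_delay(jitter_sigma_s: float, target_drop_rate: float) -> float:
    """Smallest delay d such that P(packet later than d) <= target_drop_rate,
    i.e., d = sigma * Phi^{-1}(1 - target_drop_rate)."""
    return jitter_sigma_s * NormalDist().inv_cdf(1.0 - target_drop_rate)

if __name__ == "__main__":
    base_sigma = 0.010           # assumed 10 ms of base layer jitter
    enh_sigma = 2 * base_sigma   # enhancement jitter twice the base, as in the example
    delays = {
        # base layer: very low drop rate so frames are (almost) never lost
        "base": buffer_delay(base_sigma, 1e-4),
        # enhancement layer: a higher drop rate is tolerated (resolution drop only)
        "enh": buffer_delay(enh_sigma, 1e-2),
    }
    for layer, d in delays.items():
        print(f"{layer}: buffer delay = {d * 1000:.1f} ms")
```

The design choice this illustrates is the one claimed below: the base layer buffer can stay short and strict while the enhancement layer buffer absorbs larger jitter, instead of sizing a single buffer for the worst layer.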

Claims (19)

1. A jitter buffer arrangement for a receiving endpoint in an electronic communications network, the arrangement comprising: a plurality of jitter buffers wherein each jitter buffer is designated to buffer a particular layer of a received scalably coded data stream, and a decoder coupled to the plurality of jitter buffers, wherein the decoder is configured to decode the received scalably coded data stream layer-by-layer.
2. The jitter buffer arrangement of claim 1 wherein the scalably coded data stream comprises at least one of a video data stream, an audio data stream or a combination thereof.
3. The jitter buffer arrangement of claim 1 wherein the plurality of jitter buffers precedes the decoder.
4. The jitter buffer arrangement of claim 1 wherein the plurality of jitter buffers succeeds the decoder and buffers the decoder output layer-by-layer.
5. The jitter buffer arrangement of claim 1 wherein the plurality of jitter buffers each has a design length, and wherein at least two jitter buffers have different design lengths.
6. The jitter buffer arrangement of claim 1 wherein the plurality of jitter buffers each has a length, which is adjusted dynamically in response to network conditions.
7. The jitter buffer arrangement of claim 1 wherein a first and second jitter buffers are designated to buffer a base layer and an enhancement layer, respectively.
8. The jitter buffer arrangement of claim 7 wherein the design lengths of the first and second jitter buffers are based on a statistical estimate of jitter in video streams received over the network.
9. The jitter buffer arrangement of claim 7 wherein the design length of the first buffer is a fraction of the design length of the second jitter buffer.
10. A method for managing jitter buffer delay at a receiving endpoint in an electronic communications network, the method comprising: providing a plurality of jitter buffers, wherein each jitter buffer is designated to buffer a particular layer of a received scalably coded data stream, and coupling a decoder to the plurality of jitter buffers, wherein the decoder is configured to decode the received scalably coded data stream layer-by-layer.
11. The method of claim 10 wherein the scalably coded data stream comprises at least one of a video data stream, an audio data stream and a combination thereof.
12. The method of claim 10 wherein the plurality of jitter buffers precedes the coupled decoder.
13. The method of claim 10 wherein the plurality of jitter buffers succeeds the coupled decoder, the method further comprising buffering the decoder output layer-by-layer.
14. The method of claim 10 wherein providing a plurality of jitter buffers comprises providing a plurality of jitter buffers each having a design length, and wherein at least two jitter buffers have different design lengths.
15. The method of claim 10 wherein providing a plurality of jitter buffers comprises providing a plurality of jitter buffers each having a design length, and further comprises adjusting the design lengths dynamically in response to network conditions.
16. The method of claim 10 wherein providing a plurality of jitter buffers comprises providing a first and a second jitter buffer designated to buffer a base layer and an enhancement layer, respectively.
17. The method of claim 16 further comprising assigning the design lengths for the jitter buffers based on a statistical estimate of jitter in transmitted video streams on the network.
18. The method of claim 16 further comprising assigning a design length to the first buffer, which is a fraction of the design length assigned to the second jitter buffer.
19. Computer readable media comprising a set of instructions to perform the steps recited in at least one of claims 10-18.
AU2010241332A 2005-07-20 2010-11-09 System and method for jitter buffer reduction in scalable coding Abandoned AU2010241332A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2010241332A AU2010241332A1 (en) 2005-07-20 2010-11-09 System and method for jitter buffer reduction in scalable coding
AU2013200416A AU2013200416A1 (en) 2005-07-20 2013-01-25 System and method for jitter buffer reduction in scalable coding

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US70111005P 2005-07-20 2005-07-20
US60/701,110 2005-07-20
PCT/US2006/028368 WO2008051181A1 (en) 2006-07-21 2006-07-21 System and method for jitter buffer reduction in scalable coding
AU2006346224A AU2006346224A1 (en) 2005-07-20 2006-07-21 System and method for jitter buffer reduction in scalable coding
AU2010241332A AU2010241332A1 (en) 2005-07-20 2010-11-09 System and method for jitter buffer reduction in scalable coding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2006346224A Division AU2006346224A1 (en) 2005-07-20 2006-07-21 System and method for jitter buffer reduction in scalable coding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2013200416A Division AU2013200416A1 (en) 2005-07-20 2013-01-25 System and method for jitter buffer reduction in scalable coding

Publications (1)

Publication Number Publication Date
AU2010241332A1 true AU2010241332A1 (en) 2010-12-02

Family

ID=39325574

Family Applications (2)

Application Number Title Priority Date Filing Date
AU2006346224A Abandoned AU2006346224A1 (en) 2005-07-20 2006-07-21 System and method for jitter buffer reduction in scalable coding
AU2010241332A Abandoned AU2010241332A1 (en) 2005-07-20 2010-11-09 System and method for jitter buffer reduction in scalable coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
AU2006346224A Abandoned AU2006346224A1 (en) 2005-07-20 2006-07-21 System and method for jitter buffer reduction in scalable coding

Country Status (7)

Country Link
US (1) US20080159384A1 (en)
EP (1) EP2044710A4 (en)
JP (1) JP4967020B2 (en)
CN (1) CN101366213A (en)
AU (2) AU2006346224A1 (en)
CA (1) CA2615352C (en)
WO (1) WO2008051181A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7773633B2 (en) * 2005-12-08 2010-08-10 Electronics And Telecommunications Research Institute Apparatus and method of processing bitstream of embedded codec which is received in units of packets
AU2007214423C1 (en) 2006-02-16 2012-03-01 Vidyo, Inc. System and method for thinning of scalable video coding bit-streams
EP2124447A1 (en) * 2008-05-21 2009-11-25 Telefonaktiebolaget LM Ericsson (publ) Mehod and device for graceful degradation for recording and playback of multimedia streams
US8503458B1 (en) * 2009-04-29 2013-08-06 Tellabs Operations, Inc. Methods and apparatus for characterizing adaptive clocking domains in multi-domain networks
US8428122B2 (en) * 2009-09-16 2013-04-23 Broadcom Corporation Method and system for frame buffer compression and memory resource reduction for 3D video
JP5443918B2 (en) * 2009-09-18 2014-03-19 株式会社ソニー・コンピュータエンタテインメント Terminal device, audio output method, and information processing system
GB2488159B (en) * 2011-02-18 2017-08-16 Advanced Risc Mach Ltd Parallel video decoding
GB201109519D0 (en) * 2011-06-07 2011-07-20 Nordic Semiconductor Asa Streamed radio communication
WO2013063316A1 (en) 2011-10-25 2013-05-02 Daylight Solutions, Inc. Infrared imaging microscope
US8908005B1 (en) 2012-01-27 2014-12-09 Google Inc. Multiway video broadcast system
US9001178B1 (en) 2012-01-27 2015-04-07 Google Inc. Multimedia conference broadcast system
US9258522B2 (en) 2013-03-15 2016-02-09 Stryker Corporation Privacy setting for medical communications systems
CA2908850C (en) 2013-04-08 2018-03-27 Arris Technology, Inc. Individual buffer management in video coding
EP2903289A1 (en) * 2014-01-31 2015-08-05 Thomson Licensing Receiver for layered real-time data stream and method of operating the same
US10205949B2 (en) 2014-05-21 2019-02-12 Arris Enterprises Llc Signaling for addition or removal of layers in scalable video
CA3083172C (en) 2014-05-21 2022-01-25 Arris Enterprises Llc Individual buffer management in transport of scalable video
US10601689B2 (en) 2015-09-29 2020-03-24 Dolby Laboratories Licensing Corporation Method and system for handling heterogeneous jitter
EP3220603B1 (en) 2016-03-17 2019-05-29 Dolby Laboratories Licensing Corporation Jitter buffer apparatus and method
US10812401B2 (en) 2016-03-17 2020-10-20 Dolby Laboratories Licensing Corporation Jitter buffer apparatus and method
US10904540B2 (en) * 2017-12-06 2021-01-26 Avago Technologies International Sales Pte. Limited Video decoder rate model and verification circuit

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5515377A (en) * 1993-09-02 1996-05-07 At&T Corp. Adaptive video encoder for two-layer encoding of video signals on ATM (asynchronous transfer mode) networks
US5495291A (en) * 1994-07-22 1996-02-27 Hewlett-Packard Company Decompression system for compressed video data for providing uninterrupted decompressed video data output
JP3788823B2 (en) * 1995-10-27 2006-06-21 株式会社東芝 Moving picture encoding apparatus and moving picture decoding apparatus
JPH10313315A (en) * 1997-05-12 1998-11-24 Mitsubishi Electric Corp Voice cell fluctuation absorbing device
JP3795183B2 (en) * 1997-05-16 2006-07-12 日本放送協会 Digital signal transmission method, digital signal transmission device, and digital signal reception device
JP4499204B2 (en) * 1997-07-18 2010-07-07 ソニー株式会社 Image signal multiplexing apparatus and method, and transmission medium
US6434606B1 (en) * 1997-10-01 2002-08-13 3Com Corporation System for real time communication buffer management
US6842724B1 (en) * 1999-04-08 2005-01-11 Lucent Technologies Inc. Method and apparatus for reducing start-up delay in data packet-based network streaming applications
JP2000358243A (en) * 1999-04-12 2000-12-26 Matsushita Electric Ind Co Ltd Image processing method, image processing unit and data storage medium
US7133449B2 (en) * 2000-09-18 2006-11-07 Broadcom Corporation Apparatus and method for conserving memory in a fine granularity scalability coding system
JP2003115818A (en) * 2001-10-04 2003-04-18 Nec Corp Device and method for multiplexing hierarchy
KR100436759B1 (en) * 2001-10-16 2004-06-23 삼성전자주식회사 Multimedia data decoding apparatus capable of optimization capacity of buffers therein
US7483487B2 (en) * 2002-04-11 2009-01-27 Microsoft Corporation Streaming methods and systems

Also Published As

Publication number Publication date
CN101366213A (en) 2009-02-11
WO2008051181A1 (en) 2008-05-02
EP2044710A4 (en) 2012-10-10
AU2006346224A1 (en) 2008-05-02
JP2009545204A (en) 2009-12-17
CA2615352A1 (en) 2007-01-20
CA2615352C (en) 2013-02-12
JP4967020B2 (en) 2012-07-04
EP2044710A1 (en) 2009-04-08
US20080159384A1 (en) 2008-07-03

Similar Documents

Publication Publication Date Title
CA2615352C (en) System and method for jitter buffer reduction in scalable coding
Stockhammer et al. Streaming video over variable bit-rate wireless channels
Maglaris et al. Performance models of statistical multiplexing in packet video communications
US8619865B2 (en) System and method for thinning of scalable video coding bit-streams
US7477688B1 (en) Methods for efficient bandwidth scaling of compressed video data
US8094556B2 (en) Dynamic buffering and synchronization of related media streams in packet networks
EP2011332B1 (en) Method for reducing channel change times in a digital video apparatus
De Cuetos et al. Adaptive rate control for streaming stored fine-grained scalable video
AU2007214423C1 (en) System and method for thinning of scalable video coding bit-streams
US20060088094A1 (en) Rate adaptive video coding
AU2006330074A1 (en) System and method for a high reliability base layer trunk
CN102395027A (en) System and method for transferring multiple data channels
US10033658B2 (en) Method and apparatus for rate adaptation in motion picture experts group media transport
Huang et al. Adaptive live video streaming by priority drop
Lehman et al. Experiments with delivery of HDTV over IP networks
US20080159180A1 (en) System and method for a high reliability base layer trunk
Lei et al. Adaptive video transcoding and streaming over wireless channels
AU2013200416A1 (en) System and method for jitter buffer reduction in scalable coding
EP1781035A1 (en) Real-time scalable streaming system and method
Luo et al. A multi-buffer scheduling scheme for video streaming
Ramaboli et al. MPEG video streaming solution for multihomed-terminals in heterogeneous wireless networks
Wagner et al. Playback delay and buffering optimization in scalable video broadcasting
Hong et al. QoS control for internet delivery of video data
JP2001148717A (en) Data server device
Gao et al. Real-Time scheduling for scalable video coding streaming system

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted