US20100034289A1 - Video Aware Traffic Management - Google Patents

Video Aware Traffic Management

Info

Publication number
US20100034289A1
US20100034289A1 (Application US12/511,765)
Authority
US
United States
Prior art keywords
video
packet
packets
circuitry
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/511,765
Inventor
Taeho Kim
Frederick Skoog
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alcatel USA Sourcing Inc
Original Assignee
Alcatel USA Sourcing Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alcatel USA Sourcing Inc
Priority to US12/511,765
Publication of US20100034289A1
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2416: Real-time traffic
    • H04L 47/2425: Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L 47/2433: Allocation of priorities to traffic types
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds
    • H04L 47/31: Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H04L 47/32: Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames

Definitions

  • This invention relates in general to network communications and, more particularly, to a method and apparatus for discarding packets.
  • packets of data may be lost for a variety of reasons. Some packets are randomly lost due to uncontrollable errors—for example, errors caused by noise on a transmission line, synchronization issues, etc. Some packets are lost due to congestion, i.e., it is not possible for a network element to transmit all received packets in a timely manner.
  • Current discard mechanisms for IP QoS (quality of service) algorithms implement random selection schemes to determine which packets to discard without regard to the relative effect on the eventual output.
  • missing packets cause the destination device to request a retransmission of the missing information. This is not very feasible, however, in a network that has multicasting of real-time streams such as audio or video. Normally, there will not be enough time available for requesting and receiving the retransmitted packets, unless buffers at the destination device are very large.
  • the destination device waits for a certain amount of time before declaring a packet as lost.
  • some decoders may request retransmission, while other decoders may correct the problem to the extent possible by error concealment techniques.
  • Error concealment techniques will in most cases result in degradation of output quality and are incapable of correcting some errors; further, the degree of the output error will be different depending upon the type of data in the lost packet, some of which will be more difficult to conceal than others. Thus, if packets must be discarded, some types of packets will be better candidates for discarding than others.
  • a receiver for generating a video output from a stream of data packets comprises circuitry for generating video frames from the packets and circuitry for decoding the stream of packets into a video signal, where the decoding circuitry includes circuitry for concealing errors due to missing frames of a first type.
  • the receiver selectively conceals the error or requests retransmission, based on whether the missing packet is of said first type.
  • This aspect of the present invention provides for superior receiving performance by concealing errors due to missing or corrupt low priority video frames and requesting retransmission only when high priority video frames are missing or corrupt.
  • FIG. 1 illustrates a block diagram of an IP video delivery system
  • FIG. 2 illustrates a block diagram of a multiplexer of FIG. 1 ;
  • FIG. 3 illustrates how congestion can occur at the multiplexer because aggregated data rates can exceed expected average aggregated data rates
  • FIG. 4 illustrates a block diagram of a first embodiment of a multiplexer
  • FIG. 5 illustrates a diagram of a fragmented video frame
  • FIG. 6 illustrates a flow chart describing operation of queue entry logic for the multiplexer of FIG. 4 ;
  • FIG. 7 illustrates a flow chart describing operation of the dequeue logic for the multiplexer of FIG. 4 ;
  • FIG. 8 illustrates a flow chart describing operation of channel change logic for the multiplexer of FIG. 4 ;
  • FIG. 9 illustrates a block diagram of a second embodiment of a multiplexer
  • FIG. 10 illustrates a flow chart describing the operation of an enqueue microblock for the multiplexer of FIG. 9 ;
  • FIG. 11 illustrates a flow chart describing the operation of a dequeue microblock for the multiplexer of FIG. 9 ;
  • FIG. 12 illustrates a flow chart describing the operation of the channel change logic for the multiplexer of FIG. 9 ;
  • FIGS. 13 through 21 illustrate an example of the operation of the multiplexer of FIG. 9 ;
  • FIG. 22 illustrates a state diagram showing operation for a receiver that selectively corrects errors by requesting retransmission or by error recovery techniques.
  • FIGS. 1-22 of the drawings, like numerals being used for like elements of the various drawings.
  • FIG. 1 shows a block diagram of an IP video network 10 for sending video programming to a site 12 .
  • Sources such as video head ends, or VHEs
  • the IP video receivers 22 translate the video packets to video for video monitors 24 .
  • the data must pass through a public/private network 26 which may include a plurality of routers, including edge router 28 .
  • the output of edge router 28 is received by multiplexer 30 (which could be, for example, a DSLAM access element), where the data for multiple video channels is multiplexed onto twisted pair lines 31 .
  • a modem 32 (such as a DSL modem) on the user site communicates between the multiplexer 30 and the IP video receivers 22 through on-site router 34 .
  • the VHE sources 20 stream video information to the IP video receivers 22 .
  • the video data is typically sent as a multicast transmission.
  • for on-demand video, unicast transmission may be used.
  • on-demand video generally has a longer buffer, since the delay from source 20 to viewing is not as important as it is for broadcast video services and, thus, on-demand video has a lower priority than live broadcast video services.
  • the site 12 may have several IP video receivers 22 each receiving multiple streams of programming. For example, each IP video receiver 22 could receive two video data streams. If there were three IP video receivers 22 in the site 12 , and each receiver 22 was receiving two video streams, then the link 31 between the multiplexer 30 and the modem 32 would be carrying video packets for six different data streams.
  • Modern day video protocols compress the video stream by periodically sending a full frame (compressed) of video data, followed by differential frames which indicate the changes between frames, rather than the frame itself. Accordingly, a scene which has a rapidly changing image will require a higher bandwidth than a frame that is relatively still.
  • the total available bandwidth between the video heads 20 and the IP receivers 22 for a site 12 is generally fixed by the bandwidth of link 31 , in view of the technology used by the multiplexer 30 and modem 32 .
  • the number of data streams supported by the link 31 is determined by an average bandwidth for each received channel; link 31 can also carry other data traffic, such as Internet traffic, which has a lower priority than the live video data streams (it has the lowest priority), and voice (VOIP, voice over Internet protocol), which generally has the highest priority.
  • FIG. 2 illustrates a block diagram of the multiplexer 30 supporting N different data streams.
  • For a system designed to provide viewing of up to two data streams on each of three receivers 22, N would equal six.
  • An input stage 40 receives various video streams and forwards packets to FIFO (first in, first out) memories 42 (alternatively, multiple FIFOs could be used for respective data streams).
  • An output stage 44 multiplexes packets from the FIFO memory 42 onto the link 31 (via DSL scheduling circuitry, not shown).
  • router 34 directs packets to the proper receiver 22 .
  • Traffic Management System 46 controls the multiplexing of the packets from memories 42 onto the link 31 , as described in greater detail below.
  • the congestion problem is illustrated in FIG. 3 .
  • the traffic management system 46 must make intelligent decisions about which packets to discard to minimize any adverse effects on data service to the end user.
  • data packets come from Source A and Source B.
  • Each source implements a policy to provide data at a known average rate.
  • the data from the two sources must be merged onto link 31 , which has a capacity to accommodate the combined average data rates.
  • Limited buffering is available from the FIFO memories 42 ; however, it is desirable to keep the FIFO memories as small as possible; otherwise a noticeable delay will occur when live video channels are switched.
  • even if the multiplexer 30 has the memory capacity to buffer additional packets, it may need to drop packets because of timing considerations associated with its FIFO 42 . For example, a multiplexer 30 may have a requirement that all packets will be sent within 200 ms of receiving that packet. If the condition cannot be met for an incoming packet, the multiplexer will either need to not place the incoming packet in the FIFO 42 or drop packets already in the FIFO 42 .
  • the multiplexer 30 is designed to minimize the effect of dropping packets.
  • a critical aspect of the problem is that all packets are time-critical. For each data stream all packets are generated on a single server (VHE 20 ). Once generated, each packet has a strict “use by” time. A packet that becomes stale in transit to the end user becomes unusable. To conserve shared link bandwidth, stale packets must be discarded without being transmitted over the link 31 .
  • multiplexer 30 conforms to a policy that requires the minimum degradation of service to the end user when packets are discarded. This goal is accomplished in two basic ways: (1) the multiplexer 30 discards the minimum amount of data necessary to avoid congestion and (2) the multiplexer 30 makes use of a priority scheme to ensure the least useful packets are preferentially discarded.
  • FIG. 4 illustrates a more detailed block diagram of the multiplexer 30 of FIG. 2 , showing an embodiment which makes use of packets containing priority indicators. It is assumed that the priority indicators are generated by the video head end 20 . For the illustrated embodiment, a two bit priority (four possible priority values) is used with “00” binary being the lowest priority and “11” binary being the highest priority.
  • the traffic management system 46 is split into queue entry logic 50 , dequeue logic 52 , channel change logic 54 and forward prediction logic 56 .
  • Each priority level has a threshold level in the FIFO 42 , i.e., a P00 (“Priority 00”) threshold, a P01 threshold, a P10 threshold and a P11 threshold. Additionally, there is an Initial Hold-off threshold. When a threshold level is exceeded, a flag is set (a “P00 FG” notation is used to represent the flag from priority “00”). It is assumed that the thresholds are based on a time-to-dequeue statistic.
  • if the P00 threshold is set to 50 msec, it is exceeded if there are packets in the queue which will not be dequeued within 50 msec. Since there may be packets in the FIFO 42 that will not be transmitted, the physical location of a packet may not be indicative of whether a threshold level has been exceeded.
  • a single FIFO 42 is used for multiple channels (multiple data streams).
  • the low priority flags, P00 FG and P01 FG are maintained on a global basis, i.e., one flag is used to indicate that a packet has exceeded a threshold, regardless of the channel associated with that packet.
  • the higher priority flags, P10 FG and P11 FG are maintained on a per channel basis; for example, if a packet on channel “1” exceeds the “10” threshold, the P10 flag is set for channel “1”, but not for channel “2” (in the illustrated embodiment, only two channels are shown, although an actual embodiment may support more channels).
  • FIG. 5 illustrates the association between video frames and packets.
  • information is typically passed as Ethernet packets 58 .
  • Some video frames will be larger than an Ethernet packet 58 and, hence, must be fragmented into multiple packets.
  • the receiver 22 will then group the packets back into video frames for decoding.
  • if any packet of a video frame is discarded, then surrounding Ethernet frames are inspected by the traffic management system 46 ; for any frame in which a packet has been discarded, any remaining packets associated with that frame will be discarded as well, since these packets will have no value to the receivers 22 .
  • Ethernet frames occasionally are received out-of-order, and therefore the traffic management system 46 should search a sufficient distance from a discarded packet to ensure that all associated Ethernet frames have been properly inspected.
  • FIG. 6 illustrates a flow chart describing the operation of the queue entry logic 50 .
  • the steps in FIG. 6 indicate the operation of the queue entry logic 50 for each packet that is received.
  • step 60 it is determined whether the initial hold-off threshold has been met. Until the hold-off threshold is met, no packets are dropped, even if the other priority thresholds have been exceeded. Once the initial hold-off threshold is met, subsequent packets will be checked to see if the priority thresholds are exceeded.
  • the queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P00 being exceeded.
  • if so, the P00 flag is set in step 64 and queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P01 being exceeded in step 66 . If the P01 threshold is not exceeded in step 66 , the queue entry logic 50 determines whether the packet is a P00 packet in step 68 . If so, it is discarded in step 70 .
  • if in step 66 the P01 threshold is exceeded, then the P01 flag is set in step 72 .
  • the queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P10 for the associated channel being exceeded in step 74 . If the priority threshold P10 threshold for the channel is not exceeded in step 74 , the queue entry logic 50 determines whether the packet is a P00 or a P01 packet in step 76 . If so, it is discarded in step 70 .
  • if in step 74 the P10 threshold is exceeded, then the P10 flag is set in step 78 .
  • Queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P11 for the associated channel being exceeded in step 80 . If the priority threshold P11 threshold for the channel is not exceeded in step 80 , the queue entry logic 50 determines whether the packet is a P00, a P01 or a P10 packet in step 82 . If so, it is discarded in step 70 .
  • if in step 80 the P11 threshold is exceeded, then the P11 flag is set in step 84 .
  • Queue entry logic 50 determines whether the FIFO 42 is full in step 86 . If so, the packet is discarded in step 70 . If the FIFO is not full, then the queue entry logic 50 determines whether the packet is a P11 packet in step 88 . If not, it is discarded in step 70 .
  • if the P00 threshold is not exceeded in step 62 or if the packet is determined not to be a P00 packet in step 68 , or not to be a P00/P01 packet in step 76 or not to be a P00/P01/P10 packet in step 82 , or is determined to be a P11 packet in step 88 , then it is checked to see if it is a fragment of a frame which has had packets previously discarded in step 92 . If so, it is discarded in step 70 ; if not, it is added to the queue in step 94 .
  • the queue entry logic 50 determines whether it is a fragment of a larger frame in step 96 . If so, the frame ID is saved in step 98 to match with other fragments from the same frame.
  • the flags are reset upon receiving n packets during which the condition for setting the flag no longer exists.
  • the value n is a configurable value.
  • FIG. 7 illustrates a flow chart describing operation of the dequeue logic 52 .
  • the dequeue logic 52 gets the next packet at the head of the FIFO 42 in step 100 and checks to see if the next packet in line for output from FIFO 42 is a packet associated with a previously discarded frame in step 102 . If so, it is discarded (not output) in step 104 .
  • the dequeue logic 52 determines whether the P00 flag is set; if not, the packet is dequeued (sent to the DSL scheduler for transmission on twisted pair lines 31 ) in step 108 . After the packet is dequeued, there may be packets at the front of the queue which are not transmit eligible. The dequeue logic 52 will discard these packets until the next transmit eligible packet appears. The dequeue logic 52 then waits for the next request from the DSL scheduler.
  • the packet will be discarded if it is a P00 packet (step 112 ). If the packet has a priority higher than P00 in step 112 , the dequeue logic 52 will determine whether the P01 flag is set in step 114 . If the P01 flag is set in step 114 , then the packet will be discarded if it is a P01 packet (step 116 ). If the P01 flag is not set in step 114 , the packet will be dequeued (step 108 ). If the packet has a priority higher than P01 in step 116 , the dequeue logic 52 will determine whether the P10 flag is set (for the channel associated with the packet) in step 118 .
  • the packet will be discarded if it is a P10 packet (step 120 ). If the P10 flag is not set in step 118 , the packet will be dequeued (step 108 ). If the packet has a priority higher than P10 in step 120 , the dequeue logic 52 will determine whether the P11 flag is set (for the channel associated with the packet) in step 122 . If the P11 flag for the channel is set in step 122 , then the packet will be discarded. If the P11 flag is not set, the packet will be dequeued.
  • FIG. 8 illustrates a flow chart describing the operation of the channel change logic 54 .
  • the channel change logic 54 works with the dequeue logic 52 to remove packets associated with the “from” channel.
  • the channel change logic 54 also removes low priority packets associated with the “to” channel since these packets will be associated with differential frames (i.e., frames dependent upon other video frames) and therefore useless to the receivers 22 . If another receiver 22 remained tuned to the “to” channel, these packets would not be dropped.
  • in step 130 , the next packet is taken from the head of the FIFO 42 . If it is associated with the “from” channel in step 132 , it is discarded in step 134 . If it is not associated with the “from” channel, but is associated with the “to” channel in step 136 , the packet is discarded if it is a low priority packet (P00 or P01) in step 138 . If it is a high priority packet in step 138 , then the channel-change clearing process is complete in step 140 .
  • the forward prediction logic 56 receives information regarding upcoming packets that have not yet been received.
  • the forward prediction logic can therefore estimate when additional space will be needed to accommodate packets of a specified priority. Accordingly, the discarding functions described above can be performed before actual congestion occurs.
  • FIFO thresholds are used to keep packets from entering the queue based on the threshold exceeded and a priority associated with the packet.
  • This embodiment provides a method of passing high priority packets using a minimum amount of computation resources.
  • a second embodiment is described below which operates in a different manner. When the buffer is full (i.e., when new packets will not reach the end of the FIFO buffer within the predetermined time limit), packets within the FIFO are marked for discard within the FIFO. When these marked packets reach the head of the FIFO, they are simply not passed forward for transfer over link 31 .
  • FIGS. 9-21 illustrate a second embodiment of a multiplexer 30 where the access network equipment can recognize video traffic in order to make dropping decisions during congested periods to help minimize the service quality degradation.
  • Some types of video packets are more important for the reproduction of high quality video than other types of video packets.
  • the video packet discard decision is made based upon video packet type and the current congestion level such that the least important packets (those that have the least impact on picture quality) are dropped first and the next least important packets are dropped next, and so on.
  • packets already in the video queue can be marked for discard. Discarding enqueued packets results in faster video queue space recovery and can contribute to faster channel change support.
  • the video data streams are generated by an encoder or video server 20 that follows a set of rules, or a protocol, for transport of compressed video content on an IP packet network. More than one protocol definition may be accommodated.
  • the protocols define how packet headers are assembled, and how video content priority indicators are coded at the application layer.
  • the embodiment further assumes that the video data transport data rate is within the range defined for a particular network implementation.
  • a maximum packet size is defined at the application layer such that fragmentation at the lower layers will never be required. This is to ensure that every video packet entering the access node contains the video component priority indicators.
  • a per subscriber pseudo video queue (buffer) 150 includes per-priority index lists (PILs) 152 (with one priority index list 152 for each priority level—in the illustrated embodiment, there are only two priority levels, P0 and P1) and a forward/drop list (FDL) 154 .
  • a video metadata buffer 156 has entries containing the packet metadata for each packet enqueued in the physical video packet buffer 158 .
  • the pseudo video buffer 150 , video metadata buffer 156 and physical video packet buffer 158 are coupled between the enqueue microblock 160 and the dequeue microblock 162 .
  • the physical video buffer stores packets identified by the priority of the packet (P0 or P1) and the data stream (channel) associated with the packet (S0 or S1). In an actual embodiment, there could be additional priority levels and more data streams would likely be supported.
  • enqueue microblock 160 and dequeue microblock 162 control the flow of packets into and out of the multiplexer 30 and maintain the contents of the pseudo video queue 150 and video metadata buffer 156 .
  • as video packets are received, they are stored in the physical video buffer 158 .
  • Each packet in the physical video buffer 158 has its metadata stored in an associated entry of the video metadata buffer 156 .
  • the metadata information is used for further packet processing. If multiple receivers 22 are subscribed to the same channel, multiple metadata entries will exist in the video metadata buffer 156 for the same video stream.
  • the video metadata buffer 156 is preferably a FIFO queue of a predetermined finite depth that maintains the metadata in the order of the video packets.
  • the pseudo video buffer 150 is used to identify and mark packets for discard.
  • the pseudo video buffer 150 uses circular buffers as its main data structure with head and tail pointers such that a new buffer entry is added at the tail and buffer entries are removed from the head. As shown below, this circular list data structure provides a simple mechanism to maintain the list.
  • the pseudo video buffer 150 includes a forward/drop list 154 and an index list 152 for each priority type.
  • Each entry in the forward/drop list 154 is associated with an entry in the video metadata buffer 156 .
  • the contents of each entry in the forward/drop list 154 is either an indicator of the data stream (either “0” or “1” in the illustrated embodiment), if the packet is to be forwarded, or a discard marker (“D”) to indicate that the associated packet is to be dropped.
  • Each priority index list 152 maintains an index of packets by priority. By maintaining a separate list of packets for each priority, packets or metadata of a certain priority can be easily located for marking without scanning the entire queue.
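  • The structures just described can be pictured with a small sketch. The following Python snippet is a hypothetical model (the class name, field names and layout are illustrative assumptions, not taken from the patent) of a per-subscriber pseudo video queue: a forward/drop list whose entries are either a stream ID or a discard marker, one index list per priority level pointing into that list, and a parallel metadata FIFO.

```python
from collections import deque

DISCARD = "D"   # discard marker used in the forward/drop list (FDL)

class PseudoVideoQueue:
    """Hypothetical sketch of the FIG. 9 structures; names are illustrative."""

    def __init__(self, depth):
        self.depth = depth                           # physical video buffer depth (length units)
        self.fill = 0                                # current fill level of the physical buffer
        self.fdl = deque()                           # forward/drop list: stream ID ("S0"/"S1") or DISCARD
        self.pil = {"P0": deque(), "P1": deque()}    # per-priority index lists (indices into the FDL)
        self.metadata = deque()                      # per-packet metadata, kept in packet order

    def append(self, stream, priority, length, meta=None):
        """Add one packet's bookkeeping entries (no congestion handling here)."""
        self.fdl.append(stream)
        self.pil[priority].append(len(self.fdl) - 1)
        self.metadata.append(meta or {"stream": stream, "priority": priority, "length": length})
        self.fill += length

if __name__ == "__main__":
    q = PseudoVideoQueue(depth=26)
    q.append("S1", "P0", 3)
    q.append("S0", "P1", 6)
    print(q.fdl, q.pil, q.fill)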
  • FIG. 10 is a flow chart that illustrates the operation of the enqueue microblock 160 .
  • the enqueue microblock 160 determines whether the physical video buffer can accommodate the incoming packet in step 172 . If so, the enqueue microblock 160 extracts packet information (such as priority, video stream ID, and so on) from the incoming packet and the packet is enqueued by inserting the packet's video stream ID in the forward/drop list 154 (step 174 ) and adds the forward/drop list index of the packet to the appropriate index list 152 , depending on the priority information from the metadata (step 176 ). In step 178 , the remaining queue level (QLevel) is adjusted to account for the newly enqueued video packet.
  • the enqueue microblock looks at an index list 152 associated with lower priority packets (i.e., if the incoming packet is a P1 packet, the P0 index list will be used to determine whether there are lower priority packets in the physical video queue 158 ). If the appropriate index list 152 is empty in step 180 , the incoming packet is discarded (not enqueued) in step 182 . On the other hand, if the appropriate index list 152 is not empty in step 180 , the lower priority packets designated in the index list 152 are marked for discard in the forward/drop list 154 to create additional space in the physical video buffer in steps 184 - 188 .
  • the enqueue process will only mark packets for discard until enough room is recovered to enqueue the incoming packet.
  • a packet is identified by an entry from the appropriate index list 152 ; the index in that entry points to a corresponding entry in the forward/drop list. That entry is marked for discard (by a “D” in the illustrated embodiment) in step 186 . The entry is then deleted from the index list 152 and the queue level is adjusted to account for the discarded packet.
  • Control continues at step 172 , where it is determined whether the queue has room for the incoming packet after discarding the packet. If so, the incoming packet is enqueued in steps 174 - 178 .
  • the index lists are again checked for lower priority packets within the queue. The process is repeated until either enough room is obtained by discarding lower priority packets or, if no more room can be created, by discarding the incoming packet.
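  • As a concrete illustration of the enqueue decision just described, the sketch below (hypothetical; the structure layout and victim-selection details are assumptions, not taken from the patent) marks lower priority entries in a forward/drop list for discard until the incoming packet fits, or drops the incoming packet if no lower priority packets remain.

```python
from collections import deque

DISCARD = "D"   # discard marker in the forward/drop list

def enqueue(pkt, fdl, pil, state, depth):
    """Hypothetical sketch of the FIG. 10 enqueue path.

    pkt:   dict with "stream", "priority" ("P0" or "P1") and "length".
    fdl:   list of [stream_or_DISCARD, length] entries (forward/drop list).
    pil:   {"P0": deque of FDL indices, "P1": deque of FDL indices}.
    state: {"fill": current fill level of the physical video buffer}.
    Returns True if the incoming packet was enqueued, False if it was dropped.
    """
    lower = "P0" if pkt["priority"] == "P1" else None     # only lower-priority packets may be sacrificed
    while state["fill"] + pkt["length"] > depth:          # step 172: buffer cannot accommodate the packet
        if lower is None or not pil[lower]:               # step 180: no lower-priority packets left
            return False                                  # step 182: discard the incoming packet
        victim = pil[lower].popleft()                     # steps 184-188: mark one lower-priority packet
        fdl[victim][0] = DISCARD
        state["fill"] -= fdl[victim][1]                   # reclaim its length (QLevel adjustment)
    fdl.append([pkt["stream"], pkt["length"]])            # step 174: enqueue, recording the stream ID
    pil[pkt["priority"]].append(len(fdl) - 1)             # step 176: index it by priority
    state["fill"] += pkt["length"]                        # step 178: adjust the remaining queue level
    return True

if __name__ == "__main__":
    # Roughly the situation of FIGS. 13-15: depth 26, fill 24, incoming S0/P1 packet of length 6.
    fdl = [["S1", 3], ["S1", 6], ["S0", 3], ["S0", 6], ["S1", 3], ["S0", 3]]
    pil = {"P0": deque([0, 1, 2, 4, 5]), "P1": deque([3])}
    state = {"fill": 24}
    print(enqueue({"stream": "S0", "priority": "P1", "length": 6}, fdl, pil, state, depth=26))
    print(fdl, state)   # entries 0 and 1 are marked "D"; the new packet occupies the freed space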
  • in step 190 , a send request is received by the dequeue microblock 162 from the DSL transmit scheduler. If the forward/discard list (FDL) 154 is empty in step 192 , then there are no packets to send at the current time. On the other hand, if the forward/discard list 154 indicates that there are packets to be sent in step 192 , then the entry in the forward/discard list 154 for the next packet to be sent is dequeued in step 194 , as well as the corresponding entry from the index list 152 .
  • in step 198 , if the entry from the forward/discard list indicates that the packet has been marked for discard, then control returns to step 192 to look at the next packet in the physical video buffer 158 . On the other hand, if the entry indicates that the packet is not marked for discard, then the metadata for the packet is retrieved in step 200 and the packet is forwarded in step 202 .
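  • A minimal sketch of this dequeue path follows (hypothetical; the forward/drop list and metadata queue are modeled as simple parallel deques): entries marked for discard are skipped without being forwarded, and the first unmarked entry has its metadata retrieved and its packet sent.

```python
from collections import deque

DISCARD = "D"

def dequeue_next(fdl, metadata):
    """Hypothetical sketch of the FIG. 11 dequeue path.

    fdl:      deque of forward/drop entries, each a stream ID ("S0"/"S1") or DISCARD.
    metadata: deque of per-packet metadata dicts, kept parallel to the FDL.
    Returns the metadata of the next packet to forward, or None if nothing is eligible.
    """
    while fdl:                          # step 192: are there entries left to consider?
        entry = fdl.popleft()           # step 194: take the next FDL entry (and its metadata)
        meta = metadata.popleft()
        if entry == DISCARD:            # step 198: marked for discard, do not forward
            continue
        return meta                     # steps 200-202: retrieve metadata and forward the packet
    return None                         # nothing to send at the current time

if __name__ == "__main__":
    fdl = deque([DISCARD, "S1", "S0"])
    metadata = deque([{"stream": "S0", "length": 3},
                      {"stream": "S1", "length": 6},
                      {"stream": "S0", "length": 3}])
    print(dequeue_next(fdl, metadata))  # skips the discarded head and returns the S1 packet's metadata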
  • FIG. 12 illustrates a flow chart describing the operation of channel change logic.
  • Fast channel change support can be accomplished by discarding, as soon as possible, the packets in the video buffer that are related to the previous video channel to make room for the new video stream.
  • a scan index (scanidx) is set to the head of the forward/discard list in step 212 and the video channel ID for the “from” channel is retrieved in step 214 .
  • the forward/discard list 154 is inspected at the scan index in step 216 to see if the stream at that entry matches the “from” channel data stream ID. If so, the forward/discard list is marked for discard at the entry specified by the scan index in step 218 .
  • in step 220 , the corresponding index list entry is marked as discarded as well.
  • in step 222 , the scan index is incremented to find additional packets associated with the “from” channel. If the scan index is incremented to the tail of the forward/discard list in step 224 , then all such packets have been found; otherwise the forward/discard list is searched again in steps 216 - 220 . In the event that an entry does not have a video stream ID that matches the “from” channel in step 216 , then the scan index is incremented in step 222 .
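  • The scan can be sketched as below (hypothetical; the forward/discard list is modeled as a plain list walked from head to tail and the priority index lists as deques of indices): every entry whose stream ID matches the departed channel is marked for discard and its index is removed from the priority index lists.

```python
from collections import deque

DISCARD = "D"

def mark_from_channel(fdl, pil, from_stream):
    """Hypothetical sketch of the FIG. 12 channel-change scan."""
    for scanidx in range(len(fdl)):              # steps 212, 222, 224: walk from head to tail
        if fdl[scanidx] == from_stream:          # step 216: entry belongs to the "from" channel
            fdl[scanidx] = DISCARD               # step 218: mark the FDL entry for discard
            for index_list in pil.values():      # step 220: purge it from the index lists
                if scanidx in index_list:
                    index_list.remove(scanidx)

if __name__ == "__main__":
    fdl = ["S1", "S0", DISCARD, "S0", "S1"]
    pil = {"P0": deque([0, 1, 4]), "P1": deque([3])}
    mark_from_channel(fdl, pil, from_stream="S0")
    print(fdl, pil)   # entries 1 and 3 become "D" and leave the index lists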
  • FIGS. 13-21 provide illustration of the operation of the multiplexer 30 .
  • FIG. 13 illustrates an instance of the initial state of the multiplexer 30 , where the physical video buffer 158 has a buffer depth of 26 and currently holds five packets (S1/P0, length 3; S1/P0, length 6; S0/P0, length 3; S0/P1, length 6; and S1/P0, length 3).
  • the fill level of the current packets is 21, leaving a length of 5 for new packets.
  • An incoming packet (S0/P0) with a length of 3 is received at the enqueue microblock 160 .
  • the new packet is enqueued, since there is sufficient space in the physical video buffer 158 .
  • the forward/discard list 154 adds the newly enqueued packet as index “5” in the list denoting the packet as associated with stream “0”.
  • index list 152 for P0 is updated to reference the index (5) of the newly enqueued packet.
  • An entry is made in the video metadata buffer 156 , which is associated with the packet in the physical video buffer 158 .
  • the new buffer fill level is now 24, since the new packet increased the level by three.
  • another incoming packet (S0/P1, length 6) arrives at the enqueue microblock 160 . Since the packet has a length of six and the physical video buffer 158 has only a length of two available, lower priority packets in the buffer must be marked for discard to accommodate the new packet. As described above, the enqueue microblock looks for packets of low priority (i.e., P0 packets) to discard. Since there are entries in the P0 index buffer 152 , there are available packets to discard.
  • the first packet indicated at the head of the P0 index buffer 152 (index 0) is marked for discard in both the index buffer 152 and the forward/discard buffer 154 .
  • This packet, although marked for discard, remains in the physical video buffer 158 ; however, three units are added to the available length (now five units), because the packet marked for discard will not affect the time for a new packet to move to the front of the physical video buffer.
  • the packet marked for discard at the head of the physical video buffer 158 is removed. Because it will not be forwarded, its data will simply be overwritten by the data behind it in the FIFO. This also causes the head of the forward/discard list 154 to rotate so that index “1” is at the front of the list.
  • the packet at the front of the physical video buffer 158 is forwarded to the DSL forwarding circuitry for transmission on link 31 .
  • the packet and its metadata are forwarded, and the forward/discard list 154 is updated such that index “2” is moved to the head.
  • a channel change is initiated by the user, switching away from data stream “0”. Accordingly, the enqueue microblock 160 scans the entries of the forward/discard list 154 for packets with data stream “0”, of which two are listed at index “5” and index “6”.
  • the two packets at indices “5” and “6” are marked for discard in the forward/discard list 154 and the index lists 152 for these packets are also updated.
  • the embodiment of FIGS. 9-21 ensures that higher priority packets are delivered to the customer premises, if at all possible. By maintaining lists of packets by priority level, lower priority packets can be easily found without scanning an entire list of packets.
  • FIG. 22 illustrates a state diagram showing operation of a receiver 22 that can selectively request retransmission of a packet or attempt to conceal errors.
  • in state 240 , the receiver 22 is in normal mode, receiving packets, decoding the information from the packets and generating video output.
  • the type of frame is detected in state 242 .
  • the frame type will depend upon the protocol; in general within a protocol, the frame type can be determined based on a known order of frame types set by the encoding device. If the missing frame is of a type that can be concealed, error recovery is performed in state 244 . If the missing frame is of a type that cannot be concealed, for example an I-frame or a video anchor frame, then retransmission is requested in state 246 .
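  • A compact sketch of this receiver policy is shown below (hypothetical; the frame-type sets and handler names are assumptions, not taken from the patent): concealable frame types are repaired locally, while frame types that cannot be concealed trigger a retransmission request.

```python
# Hypothetical sketch of the FIG. 22 receiver policy; the frame-type sets and
# handler names are illustrative assumptions.
CONCEALABLE_TYPES = {"P", "B"}              # differential frames: conceal and keep decoding (state 244)
NON_CONCEALABLE_TYPES = {"I", "ANCHOR"}     # e.g. I-frames or anchor frames (state 246)

def handle_missing_frame(frame_type, conceal, request_retransmission):
    """State 242: classify the missing frame and choose the recovery path."""
    if frame_type in CONCEALABLE_TYPES:
        conceal(frame_type)                   # state 244: error concealment, no retransmission
    else:
        request_retransmission(frame_type)    # state 246: ask the source to resend the frame

if __name__ == "__main__":
    conceal = lambda t: print("concealing missing", t, "frame")
    resend = lambda t: print("requesting retransmission of", t, "frame")
    handle_missing_frame("B", conceal, resend)
    handle_missing_frame("I", conceal, resend)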

Abstract

A receiver for generating a video output from a stream of data packets includes circuitry for decoding the stream of packets into a video signal, circuitry for generating video frames from the video signal, circuitry for detecting whether a missing packet is associated with a video frame of a first type and circuitry for selectively requesting retransmission of a missing packet responsive to the detecting circuitry. The decoding circuitry further comprises circuitry for concealing errors using error recovery without requesting retransmission due to missing frames of the first type.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present U.S. Utility Patent Application claims priority pursuant to 35 U.S.C. §120, as a divisional, to U.S. Utility patent application Ser. No. 11/337,372, entitled “Video Aware Traffic Management,” (Attorney Docket No. 139444), filed Jan. 23, 2006, pending, which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility Patent Application for all purposes.
  • STATEMENT OF FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Award No. 70NANB3H3053 awarded by the National Institute of Standards and Technology.
  • BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • This invention relates in general to network communications and, more particularly, to a method and apparatus for discarding packets.
  • 2. Description of the Related Art
  • In a digital information delivery network, between a source device and a destination device, packets of data may be lost for a variety of reasons. Some packets are randomly lost due to uncontrollable errors—for example, errors caused by noise on a transmission line, synchronization issues, etc. Some packets are lost due to congestion, i.e., it is not possible for a network element to transmit all received packets in a timely manner. Current discard mechanisms for IP QoS (quality of service) algorithms implement random selection schemes to determine which packets to discard without regard to the relative effect on the eventual output.
  • For some data transfer protocols, missing packets cause the destination device to request a retransmission of the missing information. This is not very feasible, however, in a network that has multicasting of real-time streams such as audio or video. Normally, there will not be enough time available for requesting and receiving the retransmitted packets, unless buffers at the destination device are very large.
  • When an expected packet in a packet stream is not received at the destination device, the destination device waits for a certain amount of time before declaring a packet as lost. Once a packet is declared as lost, some decoders may request retransmission, while other decoders may correct the problem to the extent possible by error concealment techniques. Error concealment techniques will in most cases result in degradation of output quality and are incapable of correcting some errors; further, the degree of the output error will be different depending upon the type of data in the lost packet, some of which will be more difficult to conceal than others. Thus, if packets must be discarded, some types of packets will be better candidates for discarding than others.
  • Accordingly, there is a need for a method and apparatus for identifying and discarding packets to minimize output errors.
  • BRIEF SUMMARY OF THE INVENTION
  • In a first aspect of the present invention, a receiver for generating a video output from a stream of data packets comprises circuitry for generating video frames from the packets and circuitry for decoding the stream of packets into a video signal, where the decoding circuitry includes circuitry for concealing errors due to missing frames of a first type. When a missing packet is detected, the receiver selectively conceals the error or requests retransmission, based on whether the missing packet is of said first type.
  • This aspect of the present invention provides for superior receiving performance by concealing errors due to missing or corrupt low priority video frames and requesting retransmission only when high priority video frames are missing or corrupt.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 illustrates a block diagram of an IP video delivery system;
  • FIG. 2 illustrates a block diagram of a multiplexer of FIG. 1;
  • FIG. 3 illustrates how congestion can occur at the multiplexer because aggregated data rates can exceed expected average aggregated data rates;
  • FIG. 4 illustrates a block diagram of a first embodiment of a multiplexer;
  • FIG. 5 illustrates a diagram of a fragmented video frame;
  • FIG. 6 illustrates a flow chart describing operation of queue entry logic for the multiplexer of FIG. 4;
  • FIG. 7 illustrates a flow chart describing operation of the dequeue logic for the multiplexer of FIG. 4;
  • FIG. 8 illustrates a flow chart describing operation of channel change logic for the multiplexer of FIG. 4;
  • FIG. 9 illustrates a block diagram of a second embodiment of a multiplexer;
  • FIG. 10 illustrates a flow chart describing the operation of an enqueue microblock for the multiplexer of FIG. 9;
  • FIG. 11 illustrates a flow chart describing the operation of a dequeue microblock for the multiplexer of FIG. 9;
  • FIG. 12 illustrates a flow chart describing the operation of the channel change logic for the multiplexer of FIG. 9;
  • FIGS. 13 through 21 illustrate an example of the operation of the multiplexer of FIG. 9;
  • FIG. 22 illustrates a state diagram showing operation for a receiver that selectively corrects errors by requesting retransmission or by error recovery techniques.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is best understood in relation to FIGS. 1-22 of the drawings, like numerals being used for like elements of the various drawings.
  • FIG. 1 shows a block diagram of an IP video network 10 for sending video programming to a site 12. Sources (such as video head ends, or VHEs) 20 provide the programming by streaming video information in packets. The packets are ultimately received by one or more IP video receivers 22 at the site 12. The IP video receivers 22 translate the video packets to video for video monitors 24. To get to the IP video receivers 22, the data must pass through a public/private network 26 which may include a plurality of routers, including edge router 28. The output of edge router 28 is received by multiplexer 30 (which could be, for example, a DSLAM access element), where the data for multiple video channels is multiplexed onto twisted pair lines 31. A modem 32 (such as a DSL modem) on the user site communicates between the multiplexer 30 and the IP video receivers 22 through on-site router 34.
  • In operation, the VHE sources 20 stream video information to the IP video receivers 22. For live video broadcasts, such as a live television signal, the video data is typically sent as a multicast transmission. For on-demand video, unicast transmission may be used. At the receiver side, on-demand video generally has a longer buffer, since the delay from source 20 to viewing is not as important as it is for broadcast video services and, thus, on-demand video has a lower priority than live broadcast video services. The site 12 may have several IP video receivers 22, each receiving multiple streams of programming. For example, each IP video receiver 22 could receive two video data streams. If there were three IP video receivers 22 in the site 12, and each receiver 22 was receiving two video streams, then the link 31 between the multiplexer 30 and the modem 32 would be carrying video packets for six different data streams.
  • Modern day video protocols compress the video stream by periodically sending a full frame (compressed) of video data, followed by differential frames which indicate the changes between frames, rather than the frame itself. Accordingly, a scene which has a rapidly changing image will require a higher bandwidth than a frame that is relatively still. The total available bandwidth between the video heads 20 and the IP receivers 22 for a site 12 is generally fixed by the bandwidth of link 31, in view of the technology used by the multiplexer 30 and modem 32.
  • With a fixed bandwidth in which to transfer all packets for all data streams for a site 12, the number of data streams supported by the link 31 is determined by an average bandwidth for each received channel. Link 31 can also carry other data traffic, such as Internet traffic, which has a lower priority than the live video data streams (it has the lowest priority), and voice (VOIP, voice over Internet protocol), which generally has the highest priority. However, the data rates for the separate N data flows are not constant. At times, multiple channels may be simultaneously using more than their average bandwidth, resulting in congestion on link 31.
  • FIG. 2 illustrates a block diagram of the multiplexer 30 supporting N different data streams. For a system designed to provide viewing of up to two data streams on each of three receivers 22, N would equal six. An input stage 40 receives various video streams and forwards packets to FIFO (first in, first out) memories 42 (alternatively, multiple FIFOs could be used for respective data streams). An output stage 44 multiplexes packets from the FIFO memory 42 onto the link 31 (via DSL scheduling circuitry, not shown). At the site 12, router 34 directs packets to the proper receiver 22. Traffic Management System 46 controls the multiplexing of the packets from memories 42 onto the link 31, as described in greater detail below.
  • The congestion problem is illustrated in FIG. 3, which employs only two data sources (N=2). When the combined data rates from the N sources exceed the capacity of the link 31 and the capacity of the multiplexer 30 to buffer the overage in its FIFO memories 42, the traffic management system 46 must make intelligent decisions about which packets to discard to minimize any adverse effects on data service to the end user. In FIG. 3, data packets come from Source A and Source B. Each source implements a policy to provide data at a known average rate. The data from the two sources must be merged onto link 31, which has a capacity to accommodate the combined average data rates. Limited buffering is available from the FIFO memories 42; however, it is desirable to keep the FIFO memories as small as possible; otherwise a noticeable delay will occur when live video channels are switched. When the combined data rates from the sources exceed the average for too long, the capacity to buffer the excess data is exceeded, and some packets must be dropped. Even if the multiplexer 30 has the memory capacity to buffer additional packets, it may need to drop packets because of timing considerations associated with its FIFO 42. For example, a multiplexer 30 may have a requirement that all packets will be sent within 200 ms of receiving that packet. If the condition cannot be met for an incoming packet, the multiplexer will either need to not place the incoming packet in the FIFO 42 or drop packets already in the FIFO 42.
  • In operation, the multiplexer 30 is designed to minimize the effect of dropping packets. A critical aspect of the problem is that all packets are time-critical. For each data stream all packets are generated on a single server (VHE 20). Once generated, each packet has a strict “use by” time. A packet that becomes stale in transit to the end user becomes unusable. To conserve shared link bandwidth, stale packets must be discarded without being transmitted over the link 31.
  • In operation, multiplexer 30 conforms to a policy that requires the minimum degradation of service to the end user when packets are discarded. This goal is accomplished in two basic ways: (1) the multiplexer 30 discards the minimum amount of data necessary to avoid congestion and (2) the multiplexer 30 makes use of a priority scheme to ensure the least useful packets are preferentially discarded.
  • FIG. 4 illustrates a more detailed block diagram of the multiplexer 30 of FIG. 2, showing an embodiment which makes use of packets containing priority indicators. It is assumed that the priority indicators are generated by the video head end 20. For the illustrated embodiment, a two bit priority (four possible priority values) is used with “00” binary being the lowest priority and “11” binary being the highest priority.
  • The traffic management system 46 is split into queue entry logic 50, dequeue logic 52, channel change logic 54 and forward prediction logic 56. Each priority level has a threshold level in the FIFO 42, i.e., a P00 (“Priority 00”) threshold, a P01 threshold, a P10 threshold and a P11 threshold. Additionally, there is an Initial Hold-off threshold. When a threshold level is exceeded, a flag is set (a “P00 FG” notation is used to represent the flag from priority “00”). It is assumed that the thresholds are based on a time-to-dequeue statistic. In other words, if the P00 threshold is set to 50 msec, it is exceeded if there are packets in the queue which will not be dequeued within 50 msec. Since there may be packets in the FIFO 42 that will not be transmitted, the physical location of a packet may not be indicative of whether a threshold level has been exceeded.
  • In the illustrated embodiment, a single FIFO 42 is used for multiple channels (multiple data streams). In the preferred embodiment, the low priority flags, P00 FG and P01 FG, are maintained on a global basis, i.e., one flag is used to indicate that a packet has exceeded a threshold, regardless of the channel associated with that packet. The higher priority flags, P10 FG and P11 FG, are maintained on a per channel basis; for example, if a packet on channel “1” exceeds the “10” threshold, the P10 flag is set for channel “1”, but not for channel “2” (in the illustrated embodiment, only two channels are shown, although an actual embodiment may support more channels).
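  • The time-to-dequeue test can be illustrated with a short sketch (hypothetical; the link rate, threshold values and flag representation are assumptions, not from the patent): the wait time of each queued packet is estimated from the bytes ahead of it, and a flag is raised for every threshold that wait time exceeds, globally for P00/P01 and per channel for P10/P11.

```python
from collections import namedtuple

Packet = namedtuple("Packet", "channel priority size_bytes")

LINK_RATE_BPS = 3_000_000                                                # assumed downstream link rate (bits/s)
THRESHOLDS = {"P00": 0.050, "P01": 0.080, "P10": 0.120, "P11": 0.160}    # illustrative values (seconds)

def raise_flags(fifo):
    """Return the set of priority flags whose time-to-dequeue threshold is exceeded.

    Low priority flags (P00, P01) are kept globally; high priority flags
    (P10, P11) are kept per channel, as in the illustrated embodiment.
    """
    flags = set()
    queued_bits = 0
    for pkt in fifo:                                      # the head of the FIFO is index 0
        queued_bits += pkt.size_bytes * 8
        wait = queued_bits / LINK_RATE_BPS                # estimated time until this packet is dequeued
        for level, limit in THRESHOLDS.items():
            if wait > limit:
                flags.add(level if level in ("P00", "P01") else (level, pkt.channel))
    return flags

if __name__ == "__main__":
    fifo = [Packet("ch1", "P00", 1400)] * 20 + [Packet("ch2", "P11", 1400)] * 20
    print(raise_flags(fifo))   # -> {'P00', 'P01', ('P10', 'ch2')} with these illustrative numbers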
  • For background purposes, FIG. 5 illustrates the association between video frames and packets. Throughout the network 10, information is typically passed as Ethernet packets 58. Some video frames will be larger than an Ethernet packet 58 and, hence, must be fragmented into multiple packets. The receiver 22 will then group the packets back into video frames for decoding. In the preferred embodiment, as described below, if any packet of a video frame is discarded, then surrounding Ethernet frames are inspected by the traffic management system 46; for any frame in which a packet has been discarded, any remaining packets associated with that frame will be discarded as well, since these packets will have no value to the receivers 22. It should be noted that Ethernet frames occasionally are received out-of-order, and therefore the traffic management system 46 should search a sufficient distance from a discarded packet to ensure that all associated Ethernet frames have been properly inspected.
  • FIG. 6 illustrates a flow chart describing the operation of the queue entry logic 50. The steps in FIG. 6 indicate the operation of the queue entry logic 50 for each packet that is received. In step 60 it is determined whether the initial hold-off threshold has been met. Until the hold-off threshold is met, no packets are dropped, even if the other priority thresholds have been exceeded. Once the initial hold-off threshold is met, subsequent packets will be checked to see if the priority thresholds are exceeded. In step 62, the queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P00 being exceeded. If so, the P00 flag is set in step 64 and queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P01 being exceeded in step 66. If the P01 threshold is not exceeded in step 66, the queue entry logic 50 determines whether the packet is a P00 packet in step 68. If so, it is discarded in step 70.
  • If in step 66, the P01 threshold is exceeded, then the P01 flag is set in step 72. The queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P10 for the associated channel being exceeded in step 74. If the priority threshold P10 threshold for the channel is not exceeded in step 74, the queue entry logic 50 determines whether the packet is a P00 or a P01 packet in step 76. If so, it is discarded in step 70.
  • If in step 74, the P10 threshold is exceeded, then the P10 flag is set in step 78. Queue entry logic 50 determines if queuing the packet in FIFO 42 will result in the priority threshold P11 for the associated channel being exceeded in step 80. If the priority threshold P11 threshold for the channel is not exceeded in step 80, the queue entry logic 50 determines whether the packet is a P00, a P01 or a P10 packet in step 82. If so, it is discarded in step 70.
  • If in step 80, the P11 threshold is exceeded, then the P11 flag is set in step 84. Queue entry logic 50 determines whether the FIFO 42 is full in step 86. If so, the packet is discarded in step 70. If the FIFO is not full, then the queue entry logic 50 determines whether the packet is a P11 packet in step 88. If not, it is discarded in step 70.
  • If the P00 threshold is not exceeded in step 62 or if the packet is determined not to be a P00 packet in step 68, or not to be a P00/P01 packet in step 76 or not to be a P00/P01/P10 packet in step 82, or is determined to be a P11 packet in step 88, then it is checked to see if it is a fragment of a frame which has had packets previously discarded in step 92. If so, it is discarded in step 70; if not, it is added to the queue in step 94.
  • After a packet is discarded in step 70, the queue entry logic 50 determines whether it is a fragment of a larger frame in step 96. If so, the frame ID is saved in step 98 to match with other fragments from the same frame.
  • It should be noted that the flags are reset upon receiving n packets during which the condition for setting the flag no longer exists. The value n is a configurable value.
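  • The queue-entry decision of FIG. 6 can be summarized in a short sketch (hypothetical and simplified; the flag representation, the hold-off handling and the FIFO-full treatment are assumptions): a packet is admitted only if its priority is above the highest threshold currently exceeded for its channel (P11 packets are still admitted while the FIFO has room), and fragments of frames that have already lost a packet are dropped outright.

```python
from collections import namedtuple

Packet = namedtuple("Packet", "channel priority")
ORDER = {"P00": 0, "P01": 1, "P10": 2, "P11": 3}

def admit(pkt, flags, fifo_full, dropped_frame_ids, frame_id=None):
    """Hypothetical, simplified queue-entry decision mirroring FIG. 6.

    flags holds global entries ("P00", "P01") and per-channel tuples such as
    ("P10", channel).  Returns True if the packet should be enqueued.
    """
    # Steps 92/70: fragments of a frame that already lost a packet are useless downstream.
    if frame_id is not None and frame_id in dropped_frame_ids:
        return False

    # Find the highest priority threshold currently exceeded for this packet's channel.
    highest = -1
    for level in ("P00", "P01"):                     # global flags
        if level in flags:
            highest = ORDER[level]
    for level in ("P10", "P11"):                     # per-channel flags
        if (level, pkt.channel) in flags:
            highest = ORDER[level]

    if highest < 0:
        keep = True                                  # no threshold exceeded (or still in hold-off)
    elif fifo_full:
        keep = False                                 # step 86: nothing can be admitted
    else:
        keep = ORDER[pkt.priority] > highest or pkt.priority == "P11"
    if not keep and frame_id is not None:
        dropped_frame_ids.add(frame_id)              # steps 96-98: remember the frame for later fragments
    return keep

if __name__ == "__main__":
    dropped = set()
    flags = {"P00", ("P10", "ch1")}
    print(admit(Packet("ch1", "P01"), flags, False, dropped, frame_id=7))   # False: P01 is not above P10
    print(admit(Packet("ch2", "P01"), flags, False, dropped, frame_id=8))   # True: only P00 is exceeded on ch2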
  • FIG. 7 illustrates a flowchart describing operation of the dequeue logic 52. Upon a packet extraction request from the DSL scheduler (a request that the next packet be sent to the DSL scheduler for transmission on link 31), the dequeue logic 52 gets the next packet at the head of the FIFO 42 in step 100 and checks to see if the next packet in line for output from FIFO 42 is a packet associated with a previously discarded frame in step 102. If so, it is discarded (not output) in step 104. If not, in step 106, the dequeue logic 52 determines whether the P00 flag is set; if not, the packet is dequeued (sent to the DSL scheduler for transmission on twisted pair lines 31) in step 108. After the packet is dequeued, there may be packets at the front of the queue which are not transmit eligible. The dequeue logic 52 will discard these packets until the next transmit eligible packet appears. The dequeue logic 52 then waits for the next request from the DSL scheduler.
  • If the P00 flag is set in step 106, then the packet will be discarded if it is a P00 packet (step 112). If the packet has a priority higher than P00 in step 112, the dequeue logic 52 will determine whether the P01 flag is set in step 114. If the P01 flag is set in step 114, then the packet will be discarded if it is a P01 packet (step 116). If the P01 flag is not set in step 114, the packet will be dequeued (step 108). If the packet has a priority higher than P01 in step 116, the dequeue logic 52 will determine whether the P10 flag is set (for the channel associated with the packet) in step 118. If the P10 flag for the channel is set in step 118, then the packet will be discarded if it is a P10 packet (step 120). If the P10 flag is not set in step 118, the packet will be dequeued (step 108). If the packet has a priority higher than P10 in step 120, the dequeue logic 52 will determine whether the P11 flag is set (for the channel associated with the packet) in step 122. If the P11 flag for the channel is set in step 122, then the packet will be discarded. If the P11 flag is not set, the packet will be dequeued.
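  • The dequeue-side test can be sketched as follows (hypothetical; the flag representation matches the earlier sketches and the frame bookkeeping is an assumption): packets are popped from the head of the FIFO and discarded if they belong to an already-discarded frame or if the flag for their own priority level is set; otherwise the first eligible packet is forwarded.

```python
from collections import deque, namedtuple

Packet = namedtuple("Packet", "channel priority frame_id")

def dequeue_next(fifo, flags, dropped_frame_ids):
    """Hypothetical sketch of the FIG. 7 dequeue decision."""
    while fifo:
        pkt = fifo.popleft()                          # step 100: next packet at the head of the FIFO
        if pkt.frame_id in dropped_frame_ids:         # step 102: part of a previously discarded frame
            continue                                  # step 104: discard without output
        flag_set = (pkt.priority in flags or          # steps 106/114: global P00/P01 flags
                    (pkt.priority, pkt.channel) in flags)   # steps 118/122: per-channel P10/P11 flags
        if flag_set:                                  # steps 112/116/120: discard packets of a flagged level
            dropped_frame_ids.add(pkt.frame_id)
            continue
        return pkt                                    # step 108: transmit-eligible, hand to the DSL scheduler
    return None

if __name__ == "__main__":
    fifo = deque([Packet("ch1", "P00", 1), Packet("ch1", "P10", 2)])
    print(dequeue_next(fifo, flags={"P00"}, dropped_frame_ids=set()))   # the P00 packet is dropped, the P10 packet is sent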
  • Referring again to FIG. 4, the channel change logic 54 speeds the process of discarding packets associated with a channel no longer being watched. FIG. 8 illustrates a flow chart describing the operation of the channel change logic 54. If the user changes channels (and assuming that no other user is watching the “from” channel), the old channel number (the “from” channel ID) and new channel number (the “to” channel ID) are sent to the channel change logic 54. Upon receiving this information, the channel change logic 54 works with the dequeue logic 52 to remove packets associated with the “from” channel. The channel change logic 54 also removes low priority packets associated with the “to” channel, since these packets will be associated with differential frames (i.e., frames dependent upon other video frames) and therefore useless to the receivers 22. If another receiver 22 remained tuned to the “to” channel, these packets would not be dropped.
  • In step 130, the next packet is taken from the head of the FIFO 42. If it is associated with the “from” channel in step 132, it is discarded in step 134. If it is not associated with the “from” channel, but is associated with the “to” channel in step 136, the packet is discarded if it is a low priority packet (P00 or P01) in step 138. If it is a high priority packet in step 138, then the channel-change clearing process is complete in step 140.
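  • A rough Python model of this channel-change clearing walk is given below. It reuses the assumed QueueEntryLogic sketch from above; the treatment of packets belonging to neither channel (they are simply kept) is an assumption, since the flowchart does not spell that case out.

    def clear_on_channel_change(q, from_channel, to_channel):
        """Sketch of steps 130-140: purge "from"-channel packets and low-priority
        "to"-channel packets from the head of the FIFO."""
        kept = deque()
        while q.fifo:
            pkt = q.fifo.popleft()                       # step 130
            if pkt.channel == from_channel:              # steps 132/134: drop "from" packets
                continue
            if pkt.channel == to_channel:
                if pkt.priority in ("P00", "P01"):       # steps 136/138: differential frames
                    continue
                kept.append(pkt)                         # high-priority "to" packet reached:
                break                                    # step 140: clearing is complete
            kept.append(pkt)                             # packet for some other channel
        kept.extend(q.fifo)                              # remaining packets are untouched
        q.fifo = kept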
  • Referring again to FIG. 4, the forward prediction logic 56 receives information regarding upcoming packets that have not yet been received. The forward prediction logic can therefore estimate when additional space will be needed to accommodate packets of a specified priority. Accordingly, the discarding functions described above can be performed before actual congestion occurs.
  • In the embodiment described above, FIFO thresholds are used to keep packets from entering the queue based on the threshold exceeded and the priority associated with the packet. This embodiment provides a method of passing high priority packets using a minimum amount of computation resources. A second embodiment, which operates in a different manner, is described below. When the buffer is full (i.e., when new packets would not reach the head of the FIFO buffer within the predetermined time limit), packets already within the FIFO are marked for discard. When these marked packets reach the head of the FIFO, they are simply not passed forward for transfer over link 31.
  • FIGS. 9-21 illustrate a second embodiment of a multiplexer 30 where the access network equipment can recognize video traffic in order to make dropping decisions during congested periods to help minimize the service quality degradation. Some types of video packets are more important for the reproduction of high quality video than other types of video packets. The video packet discard decision is made based upon the video packet type and the current congestion level, such that the least important packets (those that have the least impact on picture quality) are dropped first, the next least important packets are dropped next, and so on.
  • In this embodiment, packets already in the video queue can be marked for discard. Discarding enqueued packets results in faster video queue space recovery and can contribute to faster channel change support. This embodiment assumes that the video data streams are generated by an encoder or video server 20 that follows a set of rules, or a protocol, for transport of compressed video content on an IP packet network. More than one protocol definition may be accommodated. The protocols define how packet headers are assembled, and how video content priority indicators are coded at the application layer. The embodiment further assumes that the video data transport data rate is within the range defined for a particular network implementation. In addition, this embodiment assumes that a maximum packet size is defined at the application layer such that fragmentation at the lower layers will never be required. This is to ensure that every video packet entering the access node contains the video component priority indicators.
  • In FIG. 9, a per-subscriber pseudo video queue (buffer) 150 includes per-priority index lists (PILs) 152 (one priority index list 152 for each priority level; in the illustrated embodiment, there are only two priority levels, P0 and P1) and a forward/drop list (FDL) 154. A video metadata buffer 156 has entries containing the packet metadata for each packet enqueued in the physical video packet buffer 158. The pseudo video buffer 150, video metadata buffer 156 and physical video packet buffer 158 are coupled between the enqueue microblock 160 and the dequeue microblock 162. The physical video buffer stores packets identified by the priority of the packet (P0 or P1) and the data stream (channel) associated with the packet (S0 or S1). In an actual embodiment, there could be additional priority levels and more data streams would likely be supported.
  • In operation, enqueue microblock 160 and dequeue microblock 162 control the flow of packets into and out of the multiplexer 30 and maintain the contents of the pseudo video queue 150 and video metadata buffer 156. As video packets are received, they are stored in the physical video buffer 158. Each packet in the physical video buffer 158 has its metadata stored in an associated entry of the video metadata buffer 156. The metadata information is used for further packet processing. If multiple receivers 22 are subscribed to the same channel, multiple metadata entries will exist in the video metadata buffer 156 for the same video stream. The video metadata buffer 156 is preferably a FIFO queue of a predetermined finite depth that maintains the metadata in the order of the video packets.
  • When congestion is detected (i.e., the time between receiving a packet and transmitting the same packet exceeds a predetermined threshold), or if the physical video buffer 158 is full, packets within the physical video buffer 158 are marked for discard (when a packet marked for discard reaches the front of the physical video buffer 158, it will be removed without further transmission on the link 31). If there are no currently enqueued packets within the physical video buffer 158 that can be dropped to make room for the incoming packet, then the incoming packet will be dropped without being enqueued.
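  • A minimal sketch of this congestion test follows; the drain_rate and max_delay parameters are assumptions used to approximate "time between receiving and transmitting a packet", not values taken from the disclosure.

    def is_congested(fill_level, buffer_depth, drain_rate, max_delay):
        """Congested if the buffer is full or a newly arriving packet could not
        work its way to the head of physical video buffer 158 within max_delay.
        drain_rate is in buffer units per second (assumed parameter)."""
        if fill_level >= buffer_depth:
            return True
        return (fill_level / drain_rate) > max_delay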
  • The pseudo video buffer 150 is used to identify and mark packets for discard. The pseudo video buffer 150 uses circular buffers as its main data structure with head and tail pointers such that a new buffer entry is added at the tail and buffer entries are removed from the head. As shown below, this circular list data structure provides a simple mechanism to maintain the list.
  • The pseudo video buffer 150 includes a forward/drop list 154 and an index list 152 for each priority type. Each entry in the forward/drop list 154 is associated with an entry in the video metadata buffer 156. Each entry in the forward/drop list 154 contains either an indicator of the data stream (either “0” or “1” in the illustrated embodiment), if the packet is to be forwarded, or a discard marker (“D”) to indicate that the associated packet is to be dropped. Each priority index list 152 maintains an index of packets by priority. By maintaining a separate list of packets for each priority, packets or metadata of a certain priority can be easily located for marking without scanning the entire queue.
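  • The following Python sketch models these data structures. It is illustrative only: the circular head/tail bookkeeping of the actual lists is replaced by Python deques, lengths are tracked in abstract units as in FIGS. 13-21, and all field names are assumptions.

    from collections import deque

    DISCARD = "D"                                        # discard marker used in the FDL

    class PseudoVideoBuffer:
        """Toy model of pseudo video queue 150 together with the fill-level
        bookkeeping of physical video buffer 158 (lengths in abstract units)."""
        def __init__(self, depth, num_priorities=2):
            self.depth = depth                           # capacity of buffer 158
            self.fill = 0                                # units currently counted as occupied
            self.fdl = deque()                           # forward/drop list: [index, stream or "D"]
            self.pil = [deque() for _ in range(num_priorities)]   # per-priority index lists
            self.meta = {}                               # index -> metadata entry (buffer 156)
            self.next_index = 0

        def free(self):
            """Units available for a new packet."""
            return self.depth - self.fill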
  • FIG. 10 is a flow chart that illustrates the operation of the enqueue microblock 160. When a video packet arrives in step 170, the enqueue microblock 160 determines whether the physical video buffer can accommodate the incoming packet in step 172. If so, the enqueue microblock 160 extracts packet information (such as priority, video stream ID, and so on) from the incoming packet and the packet is enqueued by inserting the packet's video stream ID in the forward/drop list 154 (step 174) and adds the forward/drop list index of the packet to the appropriate index list 152, depending on the priority information from the metadata (step 176). In step 178, the remaining queue level (QLevel) is adjusted to account for the newly enqueued video packet.
  • If the buffer is congested in step 172, the enqueue microblock looks at an index list 152 associated with lower priority packets (i.e., if the incoming packet is a P1 packet, the P0 index list will be used to determine whether there are lower priority packets in the physical video buffer 158). If the appropriate index list 152 is empty in step 180, the incoming packet is discarded (not enqueued) in step 182. On the other hand, if the appropriate index list 152 is not empty in step 180, the lower priority packets designated in the index list 152 are marked for discard in the forward/drop list 154 to create additional space in the physical video buffer in steps 184-188. In the preferred embodiment, the enqueue process will only mark packets for discard until enough room is recovered to enqueue the incoming packet. In step 184, a packet is identified by an entry from the appropriate index list 152; the index in that entry points to a corresponding entry in the forward/drop list. That entry is marked for discard (by a “D” in the illustrated embodiment) in step 186. The entry is then deleted from the index list 152 and the queue level is adjusted to account for the discarded packet. Control continues at step 172, where it is determined whether the queue has room for the incoming packet after discarding the packet. If so, the incoming packet is enqueued in steps 174-178. If more space is needed to enqueue the incoming packet, the index lists are again checked for lower priority packets within the queue. The process is repeated until either enough room is obtained by discarding lower priority packets or, if no more room can be created, the incoming packet is discarded.
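  • Building on the PseudoVideoBuffer sketch above, a possible rendering of this enqueue flow is shown below; the function signature and dictionary metadata format are assumptions.

    def enqueue(buf, stream_id, priority, length):
        """Sketch of the FIG. 10 flow: mark lower-priority enqueued packets for
        discard until the incoming packet fits, or drop the incoming packet."""
        while buf.free() < length:                       # step 172: cannot accommodate
            victims = next((buf.pil[p] for p in range(priority) if buf.pil[p]), None)
            if victims is None:                          # step 180: no lower-priority packets
                return False                             # step 182: drop the incoming packet
            idx = victims.popleft()                      # step 184: take entry from index list
            for entry in buf.fdl:                        # step 186: mark "D" in forward/drop list
                if entry[0] == idx:
                    entry[1] = DISCARD
                    break
            buf.fill -= buf.meta[idx]["length"]          # step 188: recover the queue level
        idx = buf.next_index                             # steps 174-178: enqueue the packet
        buf.next_index += 1
        buf.fdl.append([idx, stream_id])
        buf.pil[priority].append(idx)
        buf.meta[idx] = {"stream": stream_id, "priority": priority, "length": length}
        buf.fill += length
        return True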
  • The operation of the dequeue microblock 162 is shown in FIG. 11. In step 190, a send request is received by the dequeue microblock 162 from the DSL transmit scheduler. If the forward/discard list (FDL) 154 is empty in step 192, then there are no packets to send at the current time. On the other hand, if the forward/discard list 154 indicates that there are packets to be sent in step 192, then the entry in the forward/discard list 154 for the next packet to be sent is dequeued in step 194, as well as the corresponding entry from the index list 152. In step 198, if the entry from the forward/discard list indicates that the packet has been marked for discard, then control returns to step 192 to look at the next packet in the physical video buffer 158. On the other hand, if the entry indicates that the packet is not marked for discard, then the metadata for the packet is retrieved in step 200 and the packet is forwarded in step 202.
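  • A matching dequeue sketch, again using the assumed PseudoVideoBuffer structure, could look like the following; it returns the metadata entry of the next packet to forward and skips entries marked for discard.

    def dequeue(buf):
        """Sketch of the FIG. 11 flow: pop forward/drop list entries, silently
        skipping those marked "D", and return the next packet's metadata (or None)."""
        while buf.fdl:                                   # step 192: any entries left?
            idx, tag = buf.fdl.popleft()                 # step 194: next FDL entry
            md = buf.meta.pop(idx)
            if tag == DISCARD:                           # step 198: marked for discard
                continue                                 # its space was recovered at mark time
            if idx in buf.pil[md["priority"]]:           # step 196: drop matching index entry
                buf.pil[md["priority"]].remove(idx)
            buf.fill -= md["length"]
            return md                                    # steps 200-202: retrieve metadata, forward
        return None                                      # nothing to send right now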
  • FIG. 12 illustrates a flow chart describing the operation of the channel change logic. Fast channel change support can be accomplished by discarding, as soon as possible, the packets in the video buffer that are related to the previous video channel in order to make room for the new video stream. When a channel change notice is received in step 210, a scan index (scanidx) is set to the head of the forward/discard list in step 212 and the video channel ID for the “from” channel is retrieved in step 214. The forward/discard list 154 is inspected at the scan index in step 216 to see if the stream at that entry matches the “from” channel data stream ID. If so, the forward/discard list is marked for discard at the entry specified by the scan index in step 218. In step 220, the corresponding index list entry is marked as discarded as well. In step 222, the scan index is incremented to find additional packets associated with the “from” channel. If the scan index has reached the tail of the forward/discard list in step 224, then all such packets have been found; otherwise the forward/discard list is searched again in steps 216-220. In the event that an entry does not have a video stream ID that matches the “from” channel in step 216, the scan index is incremented in step 222.
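  • A sketch of this scan over the forward/drop list follows, again in terms of the assumed PseudoVideoBuffer; for brevity it removes the corresponding index-list entry rather than marking it.

    def channel_change(buf, from_stream):
        """Sketch of the FIG. 12 scan: mark every remaining packet of the "from"
        channel for discard (steps 210-224)."""
        for entry in buf.fdl:                            # steps 212/222/224: scan head to tail
            idx, tag = entry
            if tag == from_stream:                       # step 216: stream ID matches "from"
                entry[1] = DISCARD                       # step 218: mark the FDL entry
                md = buf.meta[idx]
                buf.pil[md["priority"]].remove(idx)      # step 220: clear the index-list entry
                buf.fill -= md["length"]                 # space becomes immediately reusable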
  • FIGS. 13-21 provide an illustration of the operation of the multiplexer 30. FIG. 13 illustrates the initial state of the multiplexer 30, where the physical video buffer 158 has a buffer depth of 26 and currently holds five packets (S1/P0, length 3; S1/P0, length 6; S0/P0, length 3; S0/P1, length 6; and S1/P0, length 3). Hence the fill level of the current packets is 21, leaving a length of 5 for new packets. An incoming packet (S0/P0) with a length of 3 is received at the enqueue microblock 160.
  • In FIG. 14, the new packet is enqueued, since there is sufficient space in the physical video buffer 158. The forward/discard list 154 adds the newly enqueued packet as index “5” in the list denoting the packet as associated with stream “0”. Likewise, index list 152 for P0 is updated to reference the index (5) of the newly enqueued packet. An entry is made in the video metadata buffer 156, which is associated with the packet in the physical video buffer 158. The new buffer fill level is now 24, since the new packet increased the level by three.
  • In FIG. 15, another incoming packet (S0/P1, length 6) arrives at the enqueue microblock 160. Since the packet has a length of six and the physical video buffer 158 has only a length of two available, lower priority packets in the buffer must be marked for discard to accommodate the new packet. As described above, the enqueue microblock looks for packets of lower priority (i.e., P0 packets) to discard. Since there are entries in the P0 index buffer 152, there are available packets to discard.
  • In FIG. 16, the first packet indicated at the head of the P0 index buffer 152 (index 0) is marked for discard in both the index buffer 152 and the forward/discard buffer 154. This packet, although marked for deletion, remains in the physical video buffer 158; however, three units are added to the available length (now five units), because the packet marked for discard will not affect the time for a new packet to move to the front of the physical video buffer.
  • In FIG. 17, with five units available for a new packet and an incoming packet with a length of six, additional packets must be discarded if the incoming packet is to be enqueued. Since there are still entries in the P0 index buffer, there are more packets to discard. Hence, the packet indicated at the head of the P0 index buffer 152 (index 3) is marked for discard in both the index buffer 152 and the forward/discard buffer 154. This results in an available length of eight, allowing the incoming packet to be enqueued in the physical video buffer 158. The newly enqueued packet is represented in the forward/discard list 154 at index 6 (denoting the packet as being associated with stream “0”). This information is also added to the P1 index list 152 and the packet's metadata is stored in the video metadata buffer 156.
  • In FIG. 18, the packet marked for discard at the head of the physical video buffer 158 is removed. Because it will not be forwarded, its data will simply be overwritten by the data behind it in the FIFO. This also causes the head of the forward/discard list 154 to rotate so that index “1” is at the front of the list.
  • In FIG. 19, the packet at the front of the physical video buffer 158 is forwarded to the DSL forwarding circuitry for transmission on link 31. The packet and its metadata are forwarded, and the forward/discard list 154 is updated such that index “2” is moved to the head.
  • In FIG. 20, a channel change is initiated by the user, switching away from data stream “0”. Accordingly, the enqueue microblock 160 scans the entries of the forward/discard list 154 for packets with data stream “0”, of which two are listed at index “5” and index “6”.
  • In FIG. 21, the two packets at indices “5” and “6” are marked for discard in the forward/discard list 154 and the index lists 152 for these packets are also updated.
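  • For completeness, a short usage example is given below that drives the sketches above through a scenario similar to FIGS. 13-21 (depth 26, five initial packets totalling 21 units, the S0/P0 arrival, the S0/P1 arrival that forces P0 packets to be marked, and the channel change away from stream 0). Because the sketch is a simplification, the particular packets that get marked depend on the order of the P0 index list and need not match the figures exactly.

    buf = PseudoVideoBuffer(depth=26)
    for stream, prio, length in [(1, 0, 3), (1, 0, 6), (0, 0, 3), (0, 1, 6), (1, 0, 3)]:
        enqueue(buf, stream, prio, length)               # initial fill level: 21 of 26

    enqueue(buf, 0, 0, 3)                                # FIGS. 13-14: fits, fill becomes 24
    enqueue(buf, 0, 1, 6)                                # FIGS. 15-17: P0 packets marked "D"
    print("fill after marking:", buf.fill)

    forwarded = dequeue(buf)                             # FIGS. 18-19: skip marked entries,
    print("forwarded:", forwarded)                       # then forward the next live packet

    channel_change(buf, from_stream=0)                   # FIGS. 20-21: stream-0 packets marked
    print("remaining FDL:", list(buf.fdl))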
  • The embodiment of the invention described in FIGS. 9-21 ensures that higher priority packets are delivered to the customer premises, if at all possible. By maintaining lists of packets by priority level, lower priority packets can be easily found without scanning an entire list of packets.
  • In either embodiment described herein, the receivers may be faced with lost packets. FIG. 22 illustrates a state diagram showing operation of a receiver 22 that can selectively request retransmission of a packet or attempt to conceal errors. In state 240, the receiver 22 is in normal mode, receiving packets, decoding the information from the packets and generating video output. When the receiver 22 detects a missing frame, the type of frame is detected in state 242. The frame type will depend upon the protocol; in general, within a protocol, the frame type can be determined based on a known order of frame types set by the encoding device. If the missing frame is of a type that can be concealed, error recovery is performed in state 244. If the missing frame is of a type that cannot be concealed, for example an I-frame or a video anchor frame, then retransmission is requested in state 246.
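  • A minimal sketch of this receiver decision follows; the frame-type labels and the callback names are assumptions chosen for illustration, not part of the disclosure.

    CONCEALABLE = {"P", "B"}                             # assumed differential frame types

    def handle_missing_frame(frame_type, conceal, request_retransmission):
        """State 242: classify the missing frame, then either conceal the error
        (state 244) or ask the sender to retransmit (state 246)."""
        if frame_type in CONCEALABLE:
            conceal(frame_type)                          # error recovery, no retransmission
        else:                                            # e.g. an I-frame or anchor frame
            request_retransmission(frame_type)

    # Example: handle_missing_frame("I", print, lambda t: print("retransmit", t))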
  • Although the Detailed Description of the invention has been directed to certain exemplary embodiments, various modifications of these embodiments, as well as alternative embodiments, will be suggested to those skilled in the art. The invention encompasses any modifications or alternative embodiments that fall within the scope of the Claims.

Claims (10)

1. A receiver for generating a video output from a stream of data packets, comprising:
circuitry for decoding the stream of packets into a video signal;
circuitry for generating video frames from the video signal;
circuitry for detecting whether a missing packet is associated with a video frame of a first type; and
circuitry for selectively requesting retransmission of a missing packet responsive to the detecting circuitry;
wherein said decoding circuitry further comprises circuitry for concealing errors using error recovery without requesting retransmission due to missing frames of the first type.
2. The receiver of claim 1 wherein the detecting circuitry further comprises circuitry for determining a position of a video frame associated with a missing packet within an order of received frames.
3. The receiver of claim 1 further comprising:
circuitry for detecting whether a missing packet is associated with a video frame of a second type.
4. The receiver of claim 3 wherein the second type is an I-frame or a video anchor frame.
5. The receiver of claim 3 wherein the requesting retransmission circuitry further comprises circuitry for requesting retransmission of said missing packet when said missing packet is associated with a video frame of the second type.
6. A method for generating a video output from a stream of data packets in a receiver, comprising:
decoding the stream of packets into a video signal;
generating video frames from the video signal;
upon determining that a packet is missing from the stream, detecting a type of video frame associated with the missing packet and responsive to the type, selectively:
concealing errors using error recovery without requesting retransmission due to missing frames of a first type; or
requesting retransmission of a missing packet.
7. The method of claim 6 wherein the detecting step comprises the step of determining a position of a video frame associated with a missing packet within an order of received frames.
8. The method of claim 6 further comprising:
detecting whether a missing packet is associated with a video frame of a second type.
9. The method of claim 8 wherein the second type is an I-frame or a video anchor frame.
10. The method of claim 8 wherein the requesting retransmission step further comprises requesting retransmission of said missing packet when said missing packet is associated with a video frame of the second type.
US12/511,765 2006-01-23 2009-07-29 Video Aware Traffic Management Abandoned US20100034289A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/511,765 US20100034289A1 (en) 2006-01-23 2009-07-29 Video Aware Traffic Management

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/337,372 US7609709B2 (en) 2006-01-23 2006-01-23 Video aware traffic management
US12/511,765 US20100034289A1 (en) 2006-01-23 2009-07-29 Video Aware Traffic Management

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/337,372 Division US7609709B2 (en) 2006-01-23 2006-01-23 Video aware traffic management

Publications (1)

Publication Number Publication Date
US20100034289A1 true US20100034289A1 (en) 2010-02-11

Family

ID=37963527

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/337,372 Active 2028-04-17 US7609709B2 (en) 2006-01-23 2006-01-23 Video aware traffic management
US12/511,765 Abandoned US20100034289A1 (en) 2006-01-23 2009-07-29 Video Aware Traffic Management

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/337,372 Active 2028-04-17 US7609709B2 (en) 2006-01-23 2006-01-23 Video aware traffic management

Country Status (3)

Country Link
US (2) US7609709B2 (en)
EP (1) EP1811726B1 (en)
CN (1) CN101009847B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120170523A1 (en) * 2010-12-30 2012-07-05 Mehmet Reha Civanlar Scalable video sender over multiple links
US8730800B2 (en) 2008-11-17 2014-05-20 Huawei Technologies Co., Ltd. Method, apparatus, and system for transporting video streams
US20150163555A1 (en) * 2006-02-13 2015-06-11 Tvu Networks Corporation Methods, apparatus, and systems for providing media content over a communications network

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7675872B2 (en) 2004-11-30 2010-03-09 Broadcom Corporation System, method, and apparatus for displaying pictures
US8028319B2 (en) 2006-05-31 2011-09-27 At&T Intellectual Property I, L.P. Passive video caching for edge aggregation devices
WO2008038261A2 (en) 2006-09-26 2008-04-03 Liveu Ltd. Remote transmission system
EP1936854B1 (en) * 2006-12-20 2013-11-06 Alcatel Lucent Retransmission-based DSLAM and xDSL modem for lossy media
US7936677B2 (en) * 2007-03-22 2011-05-03 Sharp Laboratories Of America, Inc. Selection of an audio visual stream by sampling
CN101453296B (en) * 2007-11-29 2012-06-13 中兴通讯股份有限公司 Waiting queue control method and apparatus for convolutional Turbo code decoder
US8855211B2 (en) * 2008-01-22 2014-10-07 At&T Intellectual Property I, Lp Method and apparatus for managing video transport
EP2245770A1 (en) 2008-01-23 2010-11-03 LiveU Ltd. Live uplink transmissions and broadcasting management system and method
CN101499245B (en) * 2008-01-30 2011-11-16 安凯(广州)微电子技术有限公司 Asynchronous first-in first-out memory, liquid crystal display controller and its control method
US8665281B2 (en) * 2008-02-07 2014-03-04 Microsoft Corporation Buffer management for real-time streaming
US20090268732A1 (en) * 2008-04-29 2009-10-29 Thomson Licencing Channel change tracking metric in multicast groups
US8363548B1 (en) * 2008-12-12 2013-01-29 Rockstar Consortium Us Lp Method and system for packet discard precedence for video transport
US7826469B1 (en) 2009-03-09 2010-11-02 Juniper Networks, Inc. Memory utilization in a priority queuing system of a network device
GB2473258A (en) 2009-09-08 2011-03-09 Nds Ltd Dynamically multiplexing a broadcast stream with metadata-based event inclusion decisions and priority assignment in case of conflict
US8443097B2 (en) * 2010-04-12 2013-05-14 Alcatel Lucent Queue management unit and method for streaming video packets in a wireless network
CN101807212B (en) * 2010-04-30 2013-05-08 迈普通信技术股份有限公司 Caching method for embedded file system and embedded file system
US9379756B2 (en) 2012-05-17 2016-06-28 Liveu Ltd. Multi-modem communication using virtual identity modules
TWI458315B (en) * 2012-09-12 2014-10-21 Wistron Corp Method and system for providing digital content in a network environment
US20140164640A1 (en) * 2012-12-11 2014-06-12 The Hong Kong University Of Science And Technology Small packet priority congestion control for data center traffic
JP2014150438A (en) * 2013-02-01 2014-08-21 Toshiba Corp Reception data processing device and reception data processing method
US9369921B2 (en) 2013-05-31 2016-06-14 Liveu Ltd. Network assisted bonding
US9980171B2 (en) 2013-03-14 2018-05-22 Liveu Ltd. Apparatus for cooperating with a mobile device
JP5915820B2 (en) * 2014-03-03 2016-05-11 日本電気株式会社 COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, AND COMMUNICATION CONTROL PROGRAM
US10284485B2 (en) * 2014-07-08 2019-05-07 Telefonaktiebolaget Lm Ericsson (Publ) Communication nodes, methods therein, computer programs and a computer-readable storage medium
US10986029B2 (en) 2014-09-08 2021-04-20 Liveu Ltd. Device, system, and method of data transport with selective utilization of a single link or multiple links
WO2018211488A1 (en) 2017-05-18 2018-11-22 Liveu Ltd. Device, system, and method of wireless multiple-link vehicular communication
US9769043B2 (en) * 2014-09-22 2017-09-19 Avaya Inc. Adaptive management of a media buffer
CH710363B1 (en) 2014-11-13 2018-08-15 Styfologie Center Gmbh Lounger with detection device for body statics for medical imaging devices.
CN105681717B (en) * 2016-03-04 2019-09-17 广州星唯信息科技有限公司 The distributed storage method and system of public transport vehicle-mounted monitoring video
CN108206787A (en) * 2016-12-17 2018-06-26 北京华为数字技术有限公司 A kind of congestion-preventing approach and device
CN107070804B (en) * 2017-03-15 2019-09-20 清华大学 It joins the team label and badge remembers the display congestion marking method and device that combines out
WO2018203336A1 (en) 2017-05-04 2018-11-08 Liveu Ltd. Device, system, and method of pre-processing and data delivery for multi-link communications and for media content
DE102018129813A1 (en) * 2018-11-26 2020-05-28 Beckhoff Automation Gmbh Data transmission method and automation communication network
DE102018129809A1 (en) * 2018-11-26 2020-05-28 Beckhoff Automation Gmbh Distribution node, automation network and method for transmitting telegrams
US11627185B1 (en) * 2020-09-21 2023-04-11 Amazon Technologies, Inc. Wireless data protocol

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6076181A (en) * 1998-03-03 2000-06-13 Nokia Mobile Phones Limited Method and apparatus for controlling a retransmission/abort timer in a telecommunications system
US20020004838A1 (en) * 2000-03-02 2002-01-10 Rolf Hakenberg Data transmission method and apparatus
US20020114283A1 (en) * 2000-08-30 2002-08-22 The Chinese University Of Hong Kong System and method for error-control for multicast video distribution
US20030156603A1 (en) * 1995-08-25 2003-08-21 Rakib Selim Shlomo Apparatus and method for trellis encoding data for transmission in digital data transmission systems
US6700893B1 (en) * 1999-11-15 2004-03-02 Koninklijke Philips Electronics N.V. System and method for controlling the delay budget of a decoder buffer in a streaming data receiver
US7114002B1 (en) * 2000-10-05 2006-09-26 Mitsubishi Denki Kabushiki Kaisha Packet retransmission system, packet transmission device, packet reception device, packet retransmission method, packet transmission method and packet reception method

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5231633A (en) 1990-07-11 1993-07-27 Codex Corporation Method for prioritizing, selectively discarding, and multiplexing differing traffic type fast packets
DE19640069A1 (en) * 1996-09-28 1998-04-09 Alsthom Cge Alcatel Connection establishment procedure as well as switching center and service control facility
JPH11145987A (en) * 1997-11-12 1999-05-28 Nec Corp Cell selection and abolition system in atm switch
US6629318B1 (en) * 1998-11-18 2003-09-30 Koninklijke Philips Electronics N.V. Decoder buffer for streaming video receiver and method of operation
JP3386117B2 (en) * 2000-01-11 2003-03-17 日本電気株式会社 Multilayer class identification communication device and communication device
US6982956B2 (en) * 2000-04-26 2006-01-03 International Business Machines Corporation System and method for controlling communications network traffic through phased discard strategy selection
JP2003152752A (en) * 2001-08-29 2003-05-23 Matsushita Electric Ind Co Ltd Data transmission/reception method
US7395346B2 (en) * 2003-04-22 2008-07-01 Scientific-Atlanta, Inc. Information frame modifier
US7295519B2 (en) * 2003-06-20 2007-11-13 Motorola, Inc. Method of quality of service based flow control within a distributed switch fabric network
US7630306B2 (en) * 2005-02-18 2009-12-08 Broadcom Corporation Dynamic sharing of a transaction queue
US20060187828A1 (en) * 2005-02-18 2006-08-24 Broadcom Corporation Packet identifier for use in a network device
US7522622B2 (en) * 2005-02-18 2009-04-21 Broadcom Corporation Dynamic color threshold in a queue

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156603A1 (en) * 1995-08-25 2003-08-21 Rakib Selim Shlomo Apparatus and method for trellis encoding data for transmission in digital data transmission systems
US6076181A (en) * 1998-03-03 2000-06-13 Nokia Mobile Phones Limited Method and apparatus for controlling a retransmission/abort timer in a telecommunications system
US6700893B1 (en) * 1999-11-15 2004-03-02 Koninklijke Philips Electronics N.V. System and method for controlling the delay budget of a decoder buffer in a streaming data receiver
US20020004838A1 (en) * 2000-03-02 2002-01-10 Rolf Hakenberg Data transmission method and apparatus
US20020114283A1 (en) * 2000-08-30 2002-08-22 The Chinese University Of Hong Kong System and method for error-control for multicast video distribution
US20100313096A1 (en) * 2000-08-30 2010-12-09 Sony Corporation Error control in multicast video distribution
US7114002B1 (en) * 2000-10-05 2006-09-26 Mitsubishi Denki Kabushiki Kaisha Packet retransmission system, packet transmission device, packet reception device, packet retransmission method, packet transmission method and packet reception method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150163555A1 (en) * 2006-02-13 2015-06-11 Tvu Networks Corporation Methods, apparatus, and systems for providing media content over a communications network
US9860602B2 (en) * 2006-02-13 2018-01-02 Tvu Networks Corporation Methods, apparatus, and systems for providing media content over a communications network
US10917699B2 (en) 2006-02-13 2021-02-09 Tvu Networks Corporation Methods, apparatus, and systems for providing media and advertising content over a communications network
US11317164B2 (en) 2006-02-13 2022-04-26 Tvu Networks Corporation Methods, apparatus, and systems for providing media content over a communications network
US8730800B2 (en) 2008-11-17 2014-05-20 Huawei Technologies Co., Ltd. Method, apparatus, and system for transporting video streams
US20120170523A1 (en) * 2010-12-30 2012-07-05 Mehmet Reha Civanlar Scalable video sender over multiple links

Also Published As

Publication number Publication date
US7609709B2 (en) 2009-10-27
CN101009847A (en) 2007-08-01
US20070171928A1 (en) 2007-07-26
EP1811726B1 (en) 2017-11-29
CN101009847B (en) 2010-09-22
EP1811726A2 (en) 2007-07-25
EP1811726A3 (en) 2008-05-28

Similar Documents

Publication Publication Date Title
US7609709B2 (en) Video aware traffic management
US8495688B2 (en) System and method for fast start-up of live multicast streams transmitted over a packet network
EP1811725B1 (en) Packet discard in a multiplexer
US8208483B2 (en) Ethernet switching
US8867340B2 (en) Discarded packet indicator
US8503455B2 (en) Method for forwarding packets a related packet forwarding system, a related classification device and a related popularity monitoring device
US8804754B1 (en) Communication system and techniques for transmission from source to destination
US7133362B2 (en) Intelligent buffering process for network conference video
US20050152397A1 (en) Communication system and techniques for transmission from source to destination
US20070076604A1 (en) Multimedia data flow dropping
US7698617B2 (en) Intelligent switch and method for retransmitting a lost packet to decoder(s)
US20110085551A1 (en) Staggercasting method and apparatus using type of service (tos) information
US20120320757A1 (en) Method and Node in an Internet Protocol Television (IPTV) Network
Tham et al. Congestion adaptation and layer prioritization in a multicast scalable video delivery system
EP2051474A1 (en) Media acceleration in congestions assigned by IPD
EP2034736A1 (en) Method and device for processing data and communication system comprising such device
CN112272310A (en) Data forwarding control method, system and storage medium
Werdin Transporting live video over high packet loss networks
Feng et al. Scalable video transmission over priority network

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION