WO2006061801A1 - Wireless video streaming using single layer coding and prioritized streaming - Google Patents

Wireless video streaming using single layer coding and prioritized streaming Download PDF

Info

Publication number
WO2006061801A1
Authority
WO
WIPO (PCT)
Prior art keywords
frames
recited
video
frame
levels
Prior art date
Application number
PCT/IB2005/054140
Other languages
French (fr)
Inventor
Richard Y. Chen
Yingwei Chen
Original Assignee
Koninklijke Philips Electronics, N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics, N.V. filed Critical Koninklijke Philips Electronics, N.V.
Priority to EP05823206A priority Critical patent/EP1825684A1/en
Priority to JP2007545071A priority patent/JP2008523689A/en
Priority to US11/721,225 priority patent/US20090232202A1/en
Publication of WO2006061801A1 publication Critical patent/WO2006061801A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/188 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a video data packet, e.g. a network abstraction layer [NAL] unit
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2402 Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/61 Network physical structure; Signal processing
    • H04N21/6106 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6131 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via a mobile phone network

Definitions

  • wireless connectivity in communications continues to increase.
  • Devices that benefit from wireless connectivity include portable computers, portable handsets, personal digital assistants (PDAs) and entertainment systems, to name just a few.
  • advances in wireless connectivity have led to faster and more reliable communications over the wireless medium, certain technologies have lagged others in both quality and speed.
  • One such technology is video technology.
  • bandwidth requirements of video signals are comparatively high, video communication can tax the bandwidth limits of known wireless networks. Moreover, the bandwidth of a wireless network may depend on the time of the transmission as well as the location of the transmitter. Furthermore, interference from other wireless stations, other networks, wireless devices operating in the same frequency spectrum as well as other environmental factors can degrade video signals transmitted in a wireless medium.
  • video signal quality can suffer as a result of loss of data packets.
  • digital video content is often transmitted in packets of data, which include compressed content that is coded using transform coding with motion prediction.
  • the packets are then transmitted in a stream of packets often referred to as video streaming.
  • during video streaming, a lost or erroneous video packet can inhibit the decoding process at the receiver.
  • drift is caused by the loss of data packets belonging to reference video frames. This loss can prevent a decoder at a receiver site from correctly decoding reference video frames.
  • lost or erroneous packet data that belong to reference video frames can result in the inability to properly reconstruct a number of frames of video subsequent to the erroneous or lost packets. This is known as prediction drift. Prediction drift occurs when the reference video frames used to compensate motion in subsequent frames at the receiver's decoder do not match those used at the coder of the transmitter. Ultimately, this can result in higher distortion and reduced or unacceptable video quality.
  • scalable video content coding technology which is also known as layered video content coding.
  • technologies include Moving Picture Experts Group (MPEG)-2/4 temporal, spatial and SNR scalability, MPEG-4 FGS, data partitioning and wavelet video coding technologies.
  • the video content is compressed and prioritized into bitstreams.
  • bitstreams are packetized/partitioned into separate sub-bitstreams (layers) having different priorities. If the bandwidth of the wireless channel is insufficient, the lower-priority content layers may be dropped, allowing the base layers to be transmitted.
  • the scalable video coding technology provides benefits over known single-layer technologies, many receivers do not include decoders that are compatible with the multi-layer coded video content. Thus, the need remains to improve video transmission with single layer content coding. What is needed, therefore, is a method and apparatus of wireless communication that overcomes at least the shortcomings of known methods and apparati described above.
  • a method of communication includes providing single layer content coded video frames. The method also includes selectively assigning each of the video frames to one of a plurality of levels. In addition, the method includes selectively transmitting some or all of the video frames based on bandwidth limitations.
  • a communication link includes a receiver and a transmitter. An encoder is connected to the transmitter and is adapted to encode video signals into a plurality of single layer content coded video frames. In addition, the encoder is adapted to assign each of the video frames to one of a plurality of levels.
  • Fig. 1 is a schematic diagram of a dependency tree in accordance with an example embodiment.
  • Fig. 2 is a schematic diagram of a dependency tree in accordance with an example embodiment.
  • Fig. 3 is a schematic diagram of a dependency tree in accordance with an example embodiment.
  • Fig. 4 is a schematic diagram of a dependency tree in accordance with an example embodiment.
  • Fig. 5 is a schematic diagram of a wireless video link in accordance with an example embodiment.
  • example embodiments disclosing specific details are set forth in order to provide a thorough understanding of the present invention.
  • the present invention may be practiced in other embodiments that depart from the specific details disclosed herein.
  • descriptions of well-known devices, methods and materials may be omitted so as to not obscure the description of the present invention.
  • like numerals refer to like features throughout.
  • the example embodiments relate to methods of transmitting and receiving video streams.
  • the transmission and reception of video streams are over a wireless link.
  • video data are in a single-layer coded video stream that is packetized and arranged in a dependency structure based on priority levels.
  • the single-layer coded video bit stream is prioritized based on dependency on temporally previous frames.
  • the methods and related apparati substantially prevent prediction-drift in streaming video.
  • the methods and related apparati of the example embodiments foster adaptation of video communications in wireless networks having time and location dependent bandwidth.
  • the methods and related apparati of the example embodiments enable improved streaming video transmission in networks and links having a standard-compliant conventional single-layer decoder.
  • the wireless link is illustratively in compliance with the IEEE 802.11 protocol, its progeny and proposed amendments. Again, this is merely illustrative and it is contemplated that the methods and apparati of the example embodiments may be used in other wireless systems.
  • the wireless link may be a satellite wireless digital video broadcasting link, including high-definition terrestrial TV.
  • the methods and apparati of the example embodiments may be used to effect video transmission over a wireless mobile network such as a third generation partnership project (3GPP) network.
  • the methods and apparati of the example embodiment may be used in connection with wired technologies such as video conferencing/videophony over telephone line and broadband IP networks.
  • Fig. 1 is a schematic representation of a dependency tree 100 in accordance with an illustrative embodiment.
  • the tree 100 includes a plurality of frames each including one or more packets encoded via a single-layer motion estimating video coding method, such as MPEG or H.264.
  • the frames may be arranged in levels based on priority.
  • a first priority level is the highest priority level; a second priority level is the next highest priority level; and a third priority level is the lowest priority level.
  • three priority levels is merely illustrative, and more than three levels may be used.
  • the priority levels may be further categorized by temporal intervals.
  • the first priority level includes packets containing compressed data of intra-coded video frames or video object plane (IVOP or I frame) .
  • the I1 frame 101 includes the intra-coded video data of a frame at a particular instant of time.
  • frame 101 is an initial frame of a first Group of Pictures (GOP) single-layer content coded video stream.
  • the second priority level of the example embodiment includes prediction-coded video frames or video object planes (PVOP).
  • a P1 frame 102 is in this second priority level.
  • the P1 frame 102 includes only additional information (e.g., non-static video data).
  • the P1 frame 102 does not include redundant video information.
  • the P1 frame 102 includes motion in the video not found in the video frame of I1 101.
  • the P1 frame 102 depends on the I1 frame as a reference frame, as the I1 frame is used to predict the P1 frame.
  • the frame from which a subsequent frame depends is required for video reconstruction upon decoding at a receiver.
  • a P2 frame 103 is in the second priority level of the example embodiment, and includes additional data (e.g., non-static video data) not contained in the P1 frame 102; and a P3 frame 104 is in the second priority level, and includes additional data (e.g., non-static video data) not contained in the P2 frame 103.
  • the P2 frame 103 depends from the P1 frame 102 and the P3 frame 104 depends from the P2 frame 103.
  • the third priority level includes bidirectional prediction coded video frames or video object planes (BVOP). These frames depend from both the I1 frame and the P frames. For example, the B1 frame 105 depends from the P2 frame and the P1 frame. Similarly, B frames 106-110 selectively depend from the I1, P1, P2, and P3 frames as shown by the arrows from one frame to another. For example, frames B3 and B4 depend from the P1 frame and the P2 frame directly and from the I1 frame indirectly. As such, the B3 frame has additional data (e.g., non-static information) relative to a combination of the P1 and P2 frames.
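The dependency tree described above can be sketched as a small directed graph. A minimal sketch in Python, assuming an illustrative GOP of the I/P/B form discussed here (the frame names and exact reference pairs are illustrative, not taken from the patent's figures):

```python
# Illustrative dependency tree for one GOP of a single-layer coded
# stream: each frame maps to the reference frames it depends on.
DEPENDENCY_TREE = {
    "I1": [],            # intra-coded, no references
    "P1": ["I1"],        # predicted from I1
    "P2": ["P1"],
    "P3": ["P2"],
    "B1": ["I1", "P1"],  # bidirectionally predicted
    "B2": ["I1", "P1"],
    "B3": ["P1", "P2"],
    "B4": ["P1", "P2"],
    "B5": ["P2", "P3"],
    "B6": ["P2", "P3"],
}

def dependents(frame, tree):
    """Return all frames that depend, directly or indirectly, on `frame`."""
    direct = {f for f, refs in tree.items() if frame in refs}
    for f in list(direct):
        direct |= dependents(f, tree)
    return direct

# The I frame is required by every other frame in the GOP, while no
# frame depends on a B frame -- which is why B frames are dropped first.
assert dependents("I1", DEPENDENCY_TREE) == set(DEPENDENCY_TREE) - {"I1"}
assert dependents("B1", DEPENDENCY_TREE) == set()
```

This makes concrete why losing an I-frame packet causes prediction drift through the whole GOP, whereas losing a B frame costs only temporal resolution.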
  • the higher priority level frames are used to predict the frames of the lower priority levels of the first GOP.
  • a second intra-coded frame I2 111 begins a second GOP single-layer content coded video stream.
  • This second GOP frame is later in time than the first GOP as indicated by the time axis shown. Similar to the I1 frame, the second I2 frame is in the first priority level, and all prediction frames and bidirectional prediction frames in the second and third priority levels depend from this reference frame. Thus, the higher priority level frames are used to predict the frames of the lower priority levels of the second GOP.
  • each frame includes packetized video data.
  • the I1 frame 101 may include two video packets; the P2 frame 103 may include one packet; and the B1 frame 105 and the B2 frame 106 may each comprise a single packet. Accordingly, the I frames have the most data; the P frames have less data than the I frames; and the B frames have the least data.
  • if a B frame of the third priority level is dropped, because there are no frames that depend from the B frame, the only loss is in temporal resolution, and not a lapse in the video.
  • the video image is that of the P1 frame. All motion subsequent to the P1 frame (in frames that depend from the P1 frame) is lost.
  • I frames are the most essential frames
  • the P frames are the next-most essential
  • the B frames are the least essential for motion-compensation and, subsequently, video reconstruction.
  • example embodiments include a selective priority level-based dropping of frames of streaming video to increase the likelihood that layers of higher priority are transmitted through the degraded channel.
  • the dropping of frames from lowest priority to highest priority is effected in accordance with the available bandwidth of the channel. While the dropping of streaming video frames may result in lower temporal resolution of the resultant video, the methods of the example embodiments provide an improved video quality in reduced bandwidth networks compared to known methods. Some illustrative frame dropping strategies are described presently.
  • Fig. 2 is a schematic representation of a dependency tree of single-layer content coded video in accordance with an example embodiment.
  • the prioritization is based on the VOP type.
  • a first priority level L0 201 includes the IVOP frames, I1 202 and I2 203;
  • a second priority level 204 includes the P frames, P1 205, P2 206, P3 207 and P 208;
  • a third priority level 209 includes the B frames, B1 210 through B10 219 as shown.
  • the prioritization scheme of the example embodiment is used to determine the order of dropping of frames in the event that the bandwidth of a wireless medium will not support the bandwidth requirements of the GOP.
  • the illustrative method mandates that a frame is not dropped until all frames that depend on the frame are dropped. As such, lapses of frames in the chain of dependent frames are substantially avoided, which reduces video quality loss. Thus, while the temporal resolution of a video stream may be reduced, the complete loss of the video is substantially avoided.
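The dropping rule stated above can be sketched in a few lines of Python: a frame becomes droppable only once no remaining frame references it, and among the droppable frames the one in the lowest-priority level goes first. The frame names, dependencies and level numbers below are illustrative, not the patent's reference numerals:

```python
def drop_order(tree, level):
    """Yield frames in a dependency-safe drop order.

    `tree` maps each frame to its reference frames; `level` maps each
    frame to its priority level (0 = highest priority).  A frame is
    never yielded while any remaining frame still depends on it.
    """
    remaining = set(tree)
    while remaining:
        # droppable = frames no remaining frame references
        droppable = [f for f in remaining
                     if not any(f in tree[g] for g in remaining if g != f)]
        victim = max(droppable, key=lambda f: level[f])  # lowest priority first
        remaining.remove(victim)
        yield victim

tree = {"I1": [], "P1": ["I1"], "P2": ["P1"],
        "B1": ["I1", "P1"], "B2": ["P1", "P2"]}
level = {"I1": 0, "P1": 1, "P2": 1, "B1": 2, "B2": 2}
order = list(drop_order(tree, level))

# The B frames go first, the P chain unwinds from its end, and the
# I frame is the last frame that may be dropped.
assert set(order[:2]) == {"B1", "B2"} and order[-1] == "I1"
```

Because candidates are always leaves of the dependency graph, no surviving frame ever loses its reference, which is exactly how lapses in the dependent-frame chain are avoided.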
  • the prioritization based dependency and dropping of an example embodiment are described presently.
  • the I frames are more essential to the video stream than the P frames; and the P frames are more essential than the B frames.
  • the non-scalable (single-layer content coded video) bitstream can be arranged with frames in three priority levels based on the VOP types. If a bandwidth limitation of the first stream (commencing with I1) mandates a dropping of frames, the method of the present illustrative embodiment requires the dropping of frames based on dependency. To this end, the frames that have no frames dependent therefrom (the B VOPs) are dropped first.
  • the frames with fewer frames dependent thereon are dropped next.
  • the P frames are dropped next.
  • the P3 frame 207, having fewer frames dependent thereon than the P2 frame 206, is dropped before the P2 frame.
  • the P frames of the second priority level L1 204 of the example embodiment have a serial dependence shown by the arrows.
  • a frame is not dropped until all frames that depend on the frame are dropped.
  • the P2 frame 206 is not dropped until frames B3 212 through B6 215 and P3 frame 207 are dropped.
  • the GOP structure is repeated throughout the entire MPEG bitstream.
  • the original MPEG bitstream displays some degree of periodicity.
  • Fig. 3 is another prioritization scheme based on dependency of the frames in accordance with another example embodiment.
  • the prioritization method of the present example embodiment includes common features with features described in connection with the example embodiment of Fig. 2. Wherever practical, common features are not repeated so as to avoid obscuring the description of the example embodiments.
  • a frame is not dropped until all frames that depend on the frame are dropped.
  • dependency among frames of the same type is addressed.
  • the prioritization of the levels must include prioritizing the P frames to exploit this type of dependency. This may be referred to as an inter-frame dependency.
  • the prioritization based on the inter-frame dependency of the P frames is chosen merely to illustrate this prioritization method.
  • other frames may be similarly prioritized.
  • the GOPs of the video stream of the example embodiment of Fig. 3 are arranged in a first priority level L0 301, a second priority level L1 302, a third priority level L2 303, a fourth priority level L3 304 and a fifth priority level L4 305.
  • the first level L0 301 includes the most important frames; in this case I1 frame 306 and I2 frame 307.
  • the second level L1 302 includes P1 frame 308 and P frame 309.
  • the third level L2 303 includes P2 frame 310, which depends from P1 frame 308.
  • the fourth level L3 304 includes P3 frame 311, which depends from the P2 frame 310; and the fifth level L4 305 includes B1 frame 312 through B frame 321.
  • the frames of the fifth priority level L4 305 are dropped first, followed by those in the fourth priority level L3 304, and so forth.
  • the frames are assigned to a priority level based on their dependence on frames in higher priority. In this manner, a prioritization for dropping frames of the same type is provided.
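The level assignment of the Fig. 3 scheme can be sketched as a function of reference-chain depth: I frames land in L0, each P frame sits one level below its reference, and all B frames share the bottom level. The frame names below are illustrative:

```python
def assign_levels(tree):
    """Map each frame to a priority level (0 = highest priority).

    `tree` maps each frame name ("I*", "P*" or "B*") to its list of
    reference frames.  I/P frames are ranked by the length of their
    reference chain; B frames all go to one level below every P frame.
    """
    def chain_depth(f):
        refs = tree[f]
        return 0 if not refs else 1 + max(chain_depth(r) for r in refs)

    # I and P frames: level = depth of the reference chain
    levels = {f: chain_depth(f) for f in tree if not f.startswith("B")}
    # B frames: one shared level below the deepest P frame
    bottom = max(levels.values()) + 1
    levels.update({f: bottom for f in tree if f.startswith("B")})
    return levels

tree = {"I1": [], "P1": ["I1"], "P2": ["P1"], "P3": ["P2"],
        "B1": ["I1", "P1"], "B2": ["P2", "P3"]}
assert assign_levels(tree) == {"I1": 0, "P1": 1, "P2": 2,
                               "P3": 3, "B1": 4, "B2": 4}
```

Dropping levels from the highest number downward then automatically respects the inter-frame dependency among the P frames, since a P frame always sits above every frame predicted from it.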
  • Fig. 4 is a schematic diagram of a temporal prioritization scheme with a constant frame interval in accordance with an example embodiment.
  • the priority levels are also categorized by temporal intervals.
  • the periodic property of the original GOP is exploited.
  • in a dependency prioritization scheme such as described in connection with Figs. 2 and 3, the priority levels that contain the I and P frames are periodic, with the VOPs distributed evenly within each level.
  • the priority level that contains only the B VOPs is not periodic, which complicates the system design.
  • the B1 frame 210 and the B2 frame 211 lag temporally behind the P2 frame.
  • the first priority level 401 includes frames I1 407 and I2 408;
  • the second priority level 402 includes frames P1 409 and P 410;
  • the third level 403 includes frame P2 411;
  • the fourth level 404 includes frame P3 412;
  • the fifth level 405 includes frames B1 413, B3 414, B5 415, B7 416 and B 417;
  • the sixth level 406 includes frames B2 418, B4 419, B6 420, B8 421 and B 422.
  • the B frames are temporally prioritized.
  • B1 and B2 are in the same P period of (I1, P1); and B3 and B4 are in the same P period of (P1, P2).
  • by placing B2, B4, B6 and B8 into the sixth priority level, full periodicity of each layer is achieved.
  • the frames are dropped by their priority level, with frames of the lowest level (L5 406) dropped first and the frames of the highest level (L0 401) dropped last.
  • temporal prioritization may be used to significantly reduce the degradation due to dropped frames when dropping is necessary.
  • the example embodiment of Fig. 4 is intended to be illustrative. Clearly, the concepts of this embodiment can be expanded.
  • if m is the number of frames in a GOP and n is the number of frames in a P period, the number of P periods in the GOP is m/n.
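A worked example of this relationship, together with the temporal split of the B frames from Fig. 4, can be sketched as follows (the 15-frame GOP and the B-frame numbering are hypothetical):

```python
# With m frames per GOP and n frames per P period, the GOP contains
# m/n P periods.
m, n = 15, 3
p_periods = m // n
assert p_periods == 5

# Temporal prioritization of the B frames: the first B frame of each
# P period goes to one level and the second B frame to the next lower
# level, so that every level becomes fully periodic.
b_frames = [f"B{i}" for i in range(1, 11)]   # B1 .. B10
level5 = b_frames[0::2]   # first B of each P period
level6 = b_frames[1::2]   # second B of each P period
assert level5 == ["B1", "B3", "B5", "B7", "B9"]
assert level6 == ["B2", "B4", "B6", "B8", "B10"]
```

Each resulting level then carries exactly one frame per P period, which is the periodicity property the temporal scheme is after.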
  • labeling of the packets and assigning the packets to a level using transport layer identification is effected.
  • the non-scalable video content can be assigned to multiple priority levels.
  • a generic temporal scalability can be established this way with minimum complexity.
  • This temporal scalability is illustratively established for MPEG-coded content and facilitates the priority-oriented streaming strategy.
  • the layers with lower priorities can be dropped according to the available bandwidth to increase the chance that the layers with higher priority get through the degraded channel.
  • This streaming strategy is usually referred to as priority-based dropping. Because the video content is assigned to a priority level according to the dependency, by using the priority-based dropping, the VOPs are dropped before their reference VOPs. This way the severe quality loss caused by prediction drift can be significantly reduced if not substantially eliminated.
  • FIG. 5 is a schematic diagram depicting an illustrative streaming system 500 using Real-time Transport Protocol (RTP) /IP transport.
  • Each one of the priority levels described previously may be carried in one RTP session forming a virtual channel to facilitate adaptation.
  • This generic multi-channel streaming architecture allows various adaptation schemes. These include, but are not limited to: server-driven adaptation, receiver-driven adaptation, and/or lower-layer QoS provisioning such as the MAC QoS provided with Wi-Fi WLAN products.
  • the architecture of the streaming system 500 comprises a media server 501 (which may, e.g., be co-located with an access point of a wireless network), an IP network, and at least one media client 502 (e.g., wireless stations).
  • the video frames are transmitted by a transmitter 503 of the media server 501 to a receiver(s) 504 at the media client(s) in an on-demand manner.
  • An encoder 505 encodes the video frames as referenced previously and provides the frames to the transmitter 503. It is noted that, using similar components and methods, the client 502 may transmit video data to the server 501, or to other clients 502 either directly or via the server.
  • the receiver 504 is illustratively a prioritized multi-level receiver with a single-layer decoder.
  • the depacketized bitstream is first demultiplexed to the corresponding decoder DEC 506 based on its frame type for decoding. Reference frames are stored after reconstruction and used in motion compensation for the construction of other frames that depend on them.
  • the decoded/reconstructed frames are ordered according to their display order and sent to a renderer (not shown) via a multiplexer (not shown).
  • the dropping of frames that may be necessary in a network due to bandwidth considerations may be effected by dropping levels from lowest priority to highest priority as described previously.
  • a lower networking layer such as a MAC layer of the server 501 drops the selected packets using the prioritized dropping methods of the example embodiment and according to their transport id and/or labeling.
  • selected frames or an entire level may be dropped for a period of time. If, in time, channel conditions improve to allow more levels to be transmitted, the dropped level may be added back.
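The bandwidth-driven level selection described above can be sketched as a greedy loop over the priority levels: transmit L0 first, then keep adding lower-priority levels while they fit in the available bandwidth, and add a dropped level back once the bandwidth recovers. The per-level bitrates below are hypothetical:

```python
def levels_to_send(level_bitrates, available_bw):
    """Select priority levels to transmit, highest priority first,
    until the available bandwidth is exhausted.

    `level_bitrates` maps level number (0 = highest priority) to the
    level's bitrate; `available_bw` is the current channel bandwidth,
    in the same (illustrative) units.
    """
    sent, used = [], 0.0
    for level in sorted(level_bitrates):      # L0, L1, ... in priority order
        rate = level_bitrates[level]
        if used + rate <= available_bw:
            sent.append(level)
            used += rate
        else:
            break  # this and all lower-priority levels are dropped
    return sent

rates = {0: 400.0, 1: 300.0, 2: 200.0, 3: 150.0}   # e.g. kbit/s per level
assert levels_to_send(rates, 750.0) == [0, 1]       # L2, L3 dropped
assert levels_to_send(rates, 2000.0) == [0, 1, 2, 3]  # channel recovered
```

Because levels are cut strictly from the bottom, a frame's level is never transmitted without the levels its reference frames occupy, which is what keeps prediction drift out of the decoded stream.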

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method of communication includes providing single layer content coded video frames (101-111, 202, 203, 205-208, 210-219). The method also includes selectively assigning each of the video frames to one of a plurality of levels. In addition, the method includes selectively transmitting some or all of the video frames in a prioritized manner based on bandwidth limitations. A video link (500) is also described.

Description

Wireless Video Streaming Using Single Layer Coding and
Prioritized Streaming
The use of wireless connectivity in communications continues to increase. Devices that benefit from wireless connectivity include portable computers, portable handsets, personal digital assistants (PDAs) and entertainment systems, to name just a few. While advances in wireless connectivity have led to faster and more reliable communications over the wireless medium, certain technologies have lagged others in both quality and speed. One such technology is video technology.
Because the bandwidth requirements of video signals are comparatively high, video communication can tax the bandwidth limits of known wireless networks. Moreover, the bandwidth of a wireless network may depend on the time of the transmission as well as the location of the transmitter. Furthermore, interference from other wireless stations, other networks, wireless devices operating in the same frequency spectrum as well as other environmental factors can degrade video signals transmitted in a wireless medium.
In addition to the considerations of bandwidth and interference, video signal quality can suffer as a result of loss of data packets. To this end, digital video content is often transmitted in packets of data, which include compressed content that is coded using transform coding with motion prediction. The packets are then transmitted in a stream of packets often referred to as video streaming. However, during transmission, a lost or erroneous video packet can inhibit the decoding process at the receiver.
As is known, drift is caused by the loss of data packets belonging to reference video frames. This loss can prevent a decoder at a receiver site from correctly decoding reference video frames.
Regardless of the source, lost or erroneous packet data that belong to reference video frames can result in the inability to properly reconstruct a number of video frames subsequent to the erroneous or lost packets. This is known as prediction drift. Prediction drift occurs when the reference video frames used to compensate motion in subsequent frames at the receiver's decoder do not match those used at the coder of the transmitter. Ultimately, this can result in higher distortion and reduced or unacceptable video quality.
Certain known techniques have been explored to address the problems of varying bandwidth and channel conditions and their impact on video quality. One known technique is scalable video content coding technology, which is also known as layered video content coding. These technologies include Moving Picture Experts Group (MPEG)-2/4 temporal, spatial and SNR scalability, MPEG-4 FGS and data partitioning, and wavelet video coding technologies.
In scalable video coding technology, the video content is compressed and prioritized into bitstreams. In layered video streaming systems that make use of scalable video streaming technologies, the bitstreams are packetized/partitioned into separate sub-bitstreams (layers) having different priorities. If the bandwidth of the wireless channel is insufficient, the enhancement layers may be dropped, allowing the base layers to be transmitted. While scalable video coding technology provides benefits over known single-layer technologies, many receivers do not include decoders that are compatible with multi-layer coded video content. Thus, the need remains to improve video transmission with single layer content coding. What is needed, therefore, is a method and apparatus of wireless communication that overcomes at least the shortcomings of the known methods and apparati described above.
In accordance with an example embodiment, a method of communication includes providing single layer content coded video frames. The method also includes selectively assigning each of the video frames to one of a plurality of levels. In addition, the method includes selectively transmitting some or all of the video frames based on bandwidth limitations. In accordance with another example embodiment, a communication link includes a receiver and a transmitter. An encoder is connected to the transmitter and is adapted to encode video signals into a plurality of single layer content coded video frames. In addition, the encoder is adapted to assign each of the video frames to one of a plurality of levels.
The example embodiments are best understood from the following detailed description when read with the accompanying drawing figures. It is emphasized that the various features are not necessarily drawn to scale. In fact, the dimensions may be arbitrarily increased or decreased for clarity of discussion.
Fig. 1 is a schematic diagram of a dependency tree in accordance with an example embodiment.
Fig. 2 is a schematic diagram of a dependency tree in accordance with an example embodiment.
Fig. 3 is a schematic diagram of a dependency tree in accordance with an example embodiment.
Fig. 4 is a schematic diagram of a dependency tree in accordance with an example embodiment.
Fig. 5 is a schematic diagram of a wireless video link in accordance with an example embodiment. In the following detailed description, for purposes of explanation and not limitation, example embodiments disclosing specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one having ordinary skill in the art having had the benefit of the present disclosure, that the present invention may be practiced in other embodiments that depart from the specific details disclosed herein. Moreover, descriptions of well-known devices, methods and materials may be omitted so as to not obscure the description of the present invention. Wherever possible, like numerals refer to like features throughout.
Briefly, the example embodiments relate to methods of transmitting and receiving video streams. In example embodiments, the transmission and reception of video streams are over a wireless link. Illustratively, video data are in a single-layer coded video stream that is packetized and arranged in a dependency structure based on priority levels. To wit, the single-layer coded video bit stream is prioritized based on dependency on temporally previous frames.
Beneficially, the methods and related apparati substantially prevent prediction drift in streaming video. Moreover, the methods and related apparati of the example embodiments foster adaptation of video communications in wireless networks having time- and location-dependent bandwidth. In addition, the methods and related apparati of the example embodiments enable improved streaming video transmission in networks and links having a standard-compliant conventional single-layer decoder. These and other benefits will become clearer to one of ordinary skill in the art as the present description continues. It is noted that the description of the example embodiments includes coding of video frames in accordance with known MPEG (or its progeny) or known H.264 techniques. It is noted that these methods are merely illustrative and that other encoding methods are contemplated.
In addition, the wireless link is illustratively in compliance with the IEEE 802.11 protocol, its progeny and proposed amendments. Again, this is merely illustrative and it is contemplated that the methods and apparati of the example embodiments may be used in other wireless systems. For example, the wireless link may be a satellite wireless digital video broadcasting link, including high-definition terrestrial TV. Moreover, the methods and apparati of the example embodiments may be used to effect video transmission over a wireless mobile network, such as a third generation partnership project (3GPP) network. It is noted that in addition to wireless links, the methods and apparati of the example embodiments may be used in connection with wired technologies such as video conferencing/videophony over telephone lines and broadband IP networks.
Again, it is emphasized that the methods and apparati of the example embodiments may be used in conjunction with still other alternative encoding techniques and wireless protocols; and that these alternatives will be readily apparent to one of ordinary skill in the art, who has had the benefit of the present disclosure.
Fig. 1 is a schematic representation of a dependency tree 100 in accordance with an illustrative embodiment. The tree 100 includes a plurality of frames each including one or more packets encoded via a single-layer motion estimating video coding method, such as MPEG or H.264.
As will become clearer as the present description continues, the frames may be arranged in levels based on priority. Illustratively, a first priority level is the highest priority level; a second priority level is the next highest priority level; and a third priority level is the lowest priority level. It is emphasized that the use of three priority levels is merely illustrative, and more than three levels may be used. Moreover, the priority levels may be further categorized by temporal intervals.
The first priority level includes packets containing compressed data of intra-coded video frames or video object planes (IVOP or I frames). For example, the I1 frame 101 includes the intra-coded video data of a frame at a particular instant of time. As represented by the time axis in Fig. 1, frame 101 is the initial frame of a first Group of Pictures (GOP) single-layer content coded video stream.
The second priority level of the example embodiment includes prediction coded video frames or VOPs (PVOP). For example, a P1 frame 102 is in this second priority level. As is known, compared to the I1 frame 101, the P1 frame 102 includes only additional information (e.g., non-static video data). To this end, the P1 frame 102 does not include redundant video information. Thus, the P1 frame 102 includes motion in the video not found in the video frame of I1 101. Moreover, the P1 frame 102 depends on the I1 frame as a reference frame, as the I1 frame is used to predict the P1 frame. As is known, the frame from which a subsequent frame depends is required for video reconstruction upon decoding at a receiver.
Similarly, a P2 frame 103 is in the second priority level of the example embodiment, and includes additional data (e.g., non-static video data) not contained in the P1 frame 102; and a P3 frame 104 is in the second priority level, and includes additional data (e.g., non-static video data) not contained in the P2 frame 103. Clearly, the P2 frame 103 depends from the P1 frame 102 and the P3 frame 104 depends from the P2 frame 103.
The third priority level includes bidirectional prediction coded video frames or video object planes (BVOP). These frames depend from both the I1 frame and the P frames. For example, the B1 frame 105 depends from the I1 frame and the P1 frame. Similarly, B frames 106-110 selectively depend from the I1, P1, P2, and P3 frames as shown by the arrows from one frame to another. For example, frames B3 and B4 depend from the P1 frame and the P2 frame directly and from the I1 frame indirectly. As such, the B3 frame has additional data (e.g., non-static information) relative to a combination of the P1 and P2 frames.
From the above description of an example embodiment, and as indicated by the arrows, the higher priority level frames are used to predict the frames of the lower priority levels of the first GOP.
A second intra-coded frame I2 111 begins a second GOP single-layer content coded video stream. This second GOP is later in time than the first GOP, as indicated by the time axis shown. Similar to the I1 frame, the second I2 frame is in the first priority level, and all prediction frames and bidirectional prediction frames in the second and third priority levels depend from this reference frame. Thus, the higher priority level frames are used to predict the frames of the lower priority levels of the second GOP.
It is noted that each frame includes packetized video data. There may be one or more video network packets in a frame, or more than one frame may be comprised in one video network packet. For example, the I1 frame 101 may include two video packets; the P2 frame 103 may include one packet; and the B1 frame 105 and the B2 frame 106 together may be comprised of a single packet. Accordingly, the I frames have the most data; the P frames have less data than the I frames; and the B frames have the least data.
As can be appreciated, if a higher priority frame is lost because of a bandwidth constraint, or other factor, those frames that depend from the lost higher priority frame cannot be motion-compensated when decoding in the receiver, and the state of the video remains at the temporal level of the last priority frame that has not been dropped. In the extreme example, if the I1 frame were dropped, it is not possible to reconstruct the video at the receiver and either indiscernible video is compiled using the P1, P2 and B1-B6 frames; or the viewing screen is intentionally left blank; or the viewing screen shows the last reconstructed image.
Contrastingly, if a B frame of the third priority level is dropped, because there are no frames that depend from the B frame, the only loss is in temporal resolution, and not a lapse in the video. For example, if the B1 frame 105 and the B2 frame are dropped, the video image is that of the P1 frame. All motion subsequent to the P1 frame (in frames that depend from the P1 frame) is lost. Accordingly, in the present dependency tree, I frames are the most essential frames, the P-frames are the next-most essential, and the B-frames are the least essential for motion compensation and, subsequently, video reconstruction.
In order to mitigate the potential loss of video or video quality, example embodiments include a selective priority level-based dropping of frames of streaming video to increase the likelihood that layers of higher priority are transmitted through the degraded channel. The dropping of frames from lowest priority to highest priority is effected in accordance with the available bandwidth of the channel. While the dropping of streaming video frames may result in lower temporal resolution of the resultant video, the methods of the example embodiments provide an improved video quality in reduced bandwidth networks compared to known methods. Some illustrative frame dropping strategies are described presently.
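The level-based dropping strategy above can be sketched in a few lines: whole priority levels are discarded, lowest priority first, until the stream fits the available bandwidth. This is an illustrative sketch only; the per-level bit budgets and the `select_levels` helper are invented for the example and are not part of the described embodiments.

```python
# Illustrative sketch: frames are tagged with a priority level (0 = highest),
# and whole levels are dropped lowest-priority-first until the remaining
# stream fits the available bandwidth. Level sizes are hypothetical.

def select_levels(level_bits, available_bits):
    """Return the set of levels to transmit: drop lowest-priority levels
    (highest level numbers) first until the total fits the budget."""
    kept = sorted(level_bits)              # level 0 (I frames) first
    total = sum(level_bits.values())
    while kept and total > available_bits:
        dropped = kept.pop()               # lowest priority = highest number
        total -= level_bits[dropped]
    return set(kept)

# Hypothetical per-level bit budgets for one GOP (I frames carry the most data).
level_bits = {0: 60_000, 1: 30_000, 2: 20_000}   # L0 = I, L1 = P, L2 = B

print(select_levels(level_bits, 120_000))  # ample bandwidth: {0, 1, 2}
print(select_levels(level_bits, 95_000))   # drop the B level: {0, 1}
print(select_levels(level_bits, 70_000))   # drop B and P levels: {0}
```

As in the text, dropping a level lowers temporal resolution but never removes a frame that a transmitted frame depends on.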
Fig. 2 is a schematic representation of a dependency tree of single-layer content coded video in accordance with an example embodiment. In the present example embodiment, the prioritization is based on the VOP type. In particular, a first priority level L0 201 includes the IVOP frames, I1 202 and I2 203; a second priority level 204 includes the P frames, P1 205, P2 206, P3 207 and P 208; and a third priority level 209 includes the B frames, B1 210 through B10 219, as shown.
The prioritization scheme of the example embodiment is used to determine the order of dropping of frames in the event that the bandwidth of a wireless medium will not support the bandwidth requirements of the GOP. In general, the illustrative method mandates that a frame is not dropped until all frames that depend on that frame are dropped. As such, lapses of frames in the chain of dependent frames are substantially avoided, which reduces video quality loss. Thus, while the temporal resolution of a video stream may be reduced, the complete loss of the video is substantially avoided. The dependency-based prioritization and dropping of an example embodiment are described presently.
As described previously, the I frames are more essential to the video stream than P frames; and the P frames are more essential than B frames. Thus, the non-scalable (single-layer content coded video) bitstream can be arranged with frames in three priority levels based on the VOP types. If a bandwidth limitation of the first stream (commencing with I1) mandates a dropping of frames, the method of the present illustrative embodiment requires the dropping of frames based on dependency. To this end, the frames that have no frames dependent therefrom (the BVOPs) are dropped first.
The frames with fewer frames dependent thereon are dropped next; illustratively, these are the P frames. Moreover, there is a sub-priority consideration in the dropping of P frames. To wit, the P3 frame 207, having fewer frames dependent thereon than the P2 frame 206, is dropped before the P2 frame. Stated differently, the P frames of the second priority level L1 204 of the example embodiment have a serial dependence shown by the arrows. As such, a frame is not dropped until all frames that depend on the frame are dropped. For example, the P2 frame 206 is not dropped until frames B3 212 through B6 215 and the P3 frame 207 are dropped. As is known, the GOP structure is repeated throughout the entire MPEG bitstream. Thus, the original MPEG bitstream displays some degree of periodicity.
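The dropping rule of this embodiment (a frame is dropped only after every frame that depends on it has been dropped) can be sketched against the Fig. 2 dependency tree. The dependency table and the `can_drop` helper below are illustrative assumptions consistent with the figure, not taken from the patent text.

```python
# Sketch of the dropping rule: a frame may be dropped only after every
# frame that uses it as a reference has been dropped. Dependencies follow
# the Fig. 2 GOP (assumed structure, for illustration).

DEPENDS_ON = {               # frame -> reference frames it needs
    "P1": ["I1"], "P2": ["P1"], "P3": ["P2"],
    "B1": ["I1", "P1"], "B2": ["I1", "P1"],
    "B3": ["P1", "P2"], "B4": ["P1", "P2"],
    "B5": ["P2", "P3"], "B6": ["P2", "P3"],
}

def can_drop(frame, already_dropped):
    """True if no still-kept frame uses `frame` as a reference."""
    return all(frame not in refs or f in already_dropped
               for f, refs in DEPENDS_ON.items())

dropped = set()
order = []
# Greedily drop the B frames first, then P3, P2, P1, mirroring the levels.
for f in ["B1", "B2", "B3", "B4", "B5", "B6", "P3", "P2", "P1"]:
    assert can_drop(f, dropped), f"{f} still has dependents"
    dropped.add(f)
    order.append(f)
print(order[-3:])   # the reference P frames go last: ['P3', 'P2', 'P1']
```

Note that `can_drop("P1", set())` is false: P1 cannot be dropped while P2 and the B frames that reference it remain.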
Fig. 3 is another prioritization scheme based on dependency of the frames in accordance with another example embodiment. The prioritization method of the present example embodiment includes common features with features described in connection with the example embodiment of Fig. 2. Wherever practical, common features are not repeated so as to avoid obscuring the description of the example embodiments.
As referenced previously, in the dependency-based prioritization method of the example embodiments, a frame is not dropped until all frames that depend on the frame are dropped. In the present example embodiment, dependency among frames of the same type is addressed. For example, because some P frames depend on other P frames, the prioritization of the levels must include prioritizing the P frames to exploit this type of dependency. This may be referred to as an inter-frame dependency. Of course, the prioritization based on the inter-frame dependency of the P frames is chosen merely to illustrate this prioritization method. Clearly, other frames may be similarly prioritized.
The GOPs of the video stream of the example embodiment of Fig. 3 are arranged in a first priority level L0 301, a second priority level L1 302, a third priority level L2 303, a fourth priority level L3 304 and a fifth priority level L4 305. Within each priority level are frames, which are located in their respective levels based on their relative importance. The first level L0 301 includes the most important frames, in this case I1 frame 306 and I2 frame 307. The second level L1 302 includes P1 frame 308 and P frame 309. The third level L2 303 includes P2 frame 310, which depends from P1 frame 308. The fourth level L3 304 includes P3 frame 311, which depends from the P2 frame 310; and the fifth level L4 305 includes B1 frame 312 through B frame 321.
According to the present example embodiment, the frames of the fifth priority level L4 305 are dropped first, followed by those in the fourth priority level L3 304, and so forth. Beneficially, the frames are assigned to a priority level based on their dependence on frames in higher priority levels. In this manner, a prioritization for dropping frames of the same type is provided.
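The level assignment of Fig. 3 can be sketched as a depth computation: an I frame sits at level 0, each P frame one level below its reference, and all B frames in the last level. The `assign_levels` function and the frame names are hypothetical illustrations consistent with the figure, not the patent's own implementation.

```python
# Sketch of inter-frame-dependency prioritization (Fig. 3 style):
# level = depth of the frame in its prediction chain.

def assign_levels(frames, refs):
    """frames: frame names in coding order; refs: P frame -> its reference."""
    levels = {}
    for f in frames:
        if f.startswith("I"):
            levels[f] = 0                       # I frames: highest priority
        elif f.startswith("P"):
            levels[f] = levels[refs[f]] + 1     # one level below its reference
    b_level = max(levels.values()) + 1
    for f in frames:
        if f.startswith("B"):
            levels[f] = b_level                 # all B frames: last level
    return levels

refs = {"P1": "I1", "P2": "P1", "P3": "P2"}
frames = ["I1", "P1", "P2", "P3", "B1", "B2"]
print(assign_levels(frames, refs))
# {'I1': 0, 'P1': 1, 'P2': 2, 'P3': 3, 'B1': 4, 'B2': 4}
```

The result mirrors Fig. 3: I frames in L0, the serially dependent P frames in L1 through L3, and the B frames in L4.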
Fig. 4 is a schematic diagram of a temporal prioritization scheme with a constant frame interval in accordance with an example embodiment. As will become clearer as the present description continues, in the present example embodiment, the priority levels are also categorized by temporal intervals.
In the example embodiment described in connection with Fig. 4, the periodic property of the original GOP is exploited. To this end, it may be useful to preserve the periodicity for each individual layer to simplify the system design. For example, in dependency prioritization schemes such as those described in connection with Figs. 2 and 3, the priority levels that contain I and P frames are periodic, with all the VOPs transmitted evenly in each individual level. However, the priority level that contains only the BVOPs is not periodic, which complicates the system design. To this end, in the example embodiment of Fig. 2, it is clear that the B1 frame 210 and the B2 frame 211 lag temporally behind the P2 frame. By further partitioning the BVOPs within each P period into multiple layers according to location along the time axis, full periodicity can be achieved.
In the example embodiment of Fig. 4, there are six priority levels: a first priority level L0 401, a second priority level L1 402, a third priority level L2 403, a fourth priority level L3 404, a fifth priority level L4 405 and a sixth priority level L5 406. The first priority level 401 includes frames I1 407 and I2 408; the second priority level 402 includes frames P1 409 and P 410; the third level 403 includes frame P2 411; the fourth level 404 includes frame P3 412; the fifth level 405 includes frames B1 413, B3 414, B5 415, B7 416 and B 417; and the sixth level 406 includes frames B2 418, B4 419, B6 420, B8 421 and B 422.
Thus, the B frames are temporally prioritized. For example, B1 and B2 are in the same P period of (P1, I1); and B3 and B4 are in the same P period of (P1, P2). Hence, by partitioning B2, B4, B6 and B8 into the sixth priority level, full periodicity of each layer is achieved.
As with previous example embodiments, the frames are dropped by their priority level, with frames of the lowest level (L5 406) dropped first and the frames of the highest level (L0 401) dropped last. In this manner, temporal prioritization may be used to significantly reduce the degradation due to dropped frames when dropping is necessary. The example embodiment of Fig. 4 is intended to be illustrative. Clearly, the concepts of this embodiment can be expanded. For video coded with MPEG employing a GOP structure of (m, n) and a constant frame rate f (where m is the number of frames in a GOP and n is the number of frames in a P period), the number of P periods in the GOP is p = m/n; and the number of layers generated using the constant-interval layering scheme is L = p + n - 1. The resulting constant frame rate f_l for each layer l follows from these quantities. (The per-layer frame-rate formula appears only as an equation image, imgf000015_0001, in the source and is not reproduced here.)
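As a quick check of the formulas above, a GOP structure of (m, n) = (12, 3) is consistent with the figures (one I frame, three P frames and eight B frames per GOP, with a three-frame P period), though it is an assumed example rather than a value stated in the text. It yields four P periods and six layers, matching the six priority levels of Fig. 4:

```python
# Sketch checking p = m/n and L = p + n - 1 for an assumed GOP of (12, 3).

def layer_count(m, n):
    p = m // n            # number of P periods in the GOP
    L = p + n - 1         # layers under the constant-interval scheme
    return p, L

p, L = layer_count(12, 3)
print(p, L)   # 4 P periods, 6 layers, matching the six levels of Fig. 4
```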
In accordance with an example embodiment, to facilitate adaptive or prioritized transmission of the different priority levels, which are carried in multiple transports, the packets are labeled and assigned to a level using transport-layer identification.
Employing any of the previously described example embodiments, the non-scalable video content can be assigned to multiple priority levels. Thus, a generic temporal scalability can be established with minimal complexity. This temporal scalability is illustratively for MPEG-coded content and facilitates the priority-oriented streaming strategy. When encountering channel degradation, the layers with lower priorities can be dropped according to the available bandwidth to increase the chance that the layers with higher priority get through the degraded channel. This streaming strategy is usually referred to as priority-based dropping. Because the video content is assigned to a priority level according to the dependency, by using priority-based dropping, the VOPs are dropped before their reference VOPs. This way, the severe quality loss caused by prediction drift can be significantly reduced, if not substantially eliminated.
Figure 5 is a schematic diagram depicting an illustrative streaming system 500 using Real-time Transport Protocol (RTP)/IP transport. Each one of the priority levels described previously may be carried in one RTP session, forming a virtual channel to facilitate adaptation. This generic multi-channel streaming architecture allows various schemes of adaptation algorithms. These include, but are not limited to: server-driven adaptation, receiver-driven adaptation, and/or adaptation via lower-layer QoS provisioning such as the MAC QoS provided with Wi-Fi WLAN products.
Illustratively, the architecture of the streaming system 500 comprises a media server 501 (which may, e.g., be co-located with an access point of a wireless network), an IP network, and at least one media client 502 (e.g., wireless stations). The video frames are transmitted by a transmitter 503 of the media server 501 to a receiver 504 at the media client(s) in an on-demand manner. An encoder 505 encodes the video frames as referenced previously and provides the frames to the transmitter 503. It is noted that, using similar components and methods, the client 502 may transmit video data to the server 501, or to other clients 502, either directly or via the server.
In the illustrative system 500, the receiver 504 is illustratively a prioritized multi-level receiver with a single-layer decoder 506. Using known techniques, the depacketized bitstream is first multiplexed to the corresponding decoder DEC 506 based on its frame type for decoding. Reference frames are stored after reconstruction and used in motion compensation for the construction of other frames that depend on them. The decoded/reconstructed frames are ordered according to their display order and sent to a renderer (not shown) via a multiplexer (not shown).
In accordance with an example embodiment, the dropping of frames that may be necessary in a network due to bandwidth considerations may be effected by dropping levels from lowest priority to highest priority, as described previously. Illustratively, a lower networking layer, such as a MAC layer of the server 501, drops the selected packets using the prioritized dropping methods of the example embodiment and according to their transport id and/or labeling. As such, selected frames or an entire level may be dropped for a period of time. If, in time, channel conditions improve to allow more levels to be transmitted, the dropped level may be added back.
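A lower-layer drop decision of the kind described above can be sketched as a filter over level-labeled packets, where a cutoff level derived from current channel conditions decides which labels are forwarded. The packet fields and the `forward` helper are assumptions made for illustration, not components named by the embodiment.

```python
# Sketch: each packet carries its priority level as a transport label;
# a channel-condition-derived cutoff decides which labels are forwarded.

def forward(packets, max_level):
    """Keep packets whose priority label is within the current cutoff."""
    return [p for p in packets if p["level"] <= max_level]

# Hypothetical queue of labeled packets (level 0 = highest priority).
queue = [{"seq": i, "level": lvl} for i, lvl in enumerate([0, 2, 1, 2, 0, 1])]

print(len(forward(queue, 2)))  # good channel: all 6 packets forwarded
print(len(forward(queue, 0)))  # degraded channel: only the 2 level-0 packets
```

When conditions improve, raising `max_level` back up restores the dropped levels, as in the last sentence above.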
It is contemplated that the various methods, devices and networks described in conjunction with transmitting video data of the example embodiments can be implemented in hardware and software. It is emphasized that the various methods, devices and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those skilled in the art can implement the various example methods, devices and networks in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the appended claims.

Claims

CLAIMS :
1. A method of video communication, the method comprising: providing single layer content coded video frames (101-111); selectively assigning each of the frames to one of a plurality of levels (201, 204, 209); and based on bandwidth limitations, selectively transmitting some or all of the frames based on their level.
2. A method as recited in claim 1, wherein the selective assigning further comprises establishing a priority for each of the plurality of levels.
3. A method as recited in claim 2, wherein the priority levels are based on a dependency of frames on other frames.
4. A method as recited in claim 2, wherein a frame in a higher priority level is dropped after a frame in a lower priority level that depends on the frame in the higher priority level is dropped.
5. A method as recited in claim 2, wherein the priority of the levels is based on a periodicity of the frames.
6. A method as recited in claim 5, wherein the priority of levels substantially preserves the periodicity of the frames.
7. A method as recited in claim 2, wherein the transmitting further comprises dropping certain frames based on the bandwidth considerations, and wherein none of the certain frames is dropped until all frames that depend on the certain frames are dropped.
8. A method as recited in claim 7, wherein a highest priority level includes packets in an intra video object plane (IVOP) , a lower priority level includes packets in a prediction video object plane (PVOP) and a lowest priority level includes packets in a bidirectional prediction video object plane (BVOP) .
9. A method as recited in claim 8, wherein the method further comprises partitioning the PVOPs within a group of pictures (GOP) into certain levels of the plurality of levels based on an inter-frame dependency.
10. A method as recited in claim 1, wherein the method further comprises: providing a receiver (504) with a single layer decoder (506); decoding the single-layer content video packets; and reconstructing the video.
11. A method as recited in claim 1, wherein the communication is wireless.
12. A method as recited in claim 1, wherein the plurality of priority levels is also categorized by temporal intervals.
13. A communication link (500), comprising: a transmitter (503) ; a receiver (504) ; and an encoder (505) connected to the transmitter, wherein the encoder is adapted to encode video signals into a plurality of single layer content coded video frames, and the encoder is adapted to assign each of the video frames to one of a plurality of levels (201, 204, 209) .
14. A communication link as recited in claim 13, wherein the link is a wireless link.
15. A communication link as recited in claim 14, wherein based on bandwidth limitations of the link, the transmitter selectively transmits some or all of the frames based on their level.
16. A communication link as recited in claim 13, further comprising a decoder (506) , which decodes the single-layer content video frames.
17. A communication link as recited in claim 13, wherein the plurality of levels are prioritized.
18. A communication link as recited in claim 17, wherein a highest priority level includes packets in an intra video object plane (IVOP) , a lower priority level includes packets in a prediction video object plane (PVOP) and a lowest priority level includes packets in a bidirectional prediction video object plane (BVOP) .
19. A communication link as recited in claim 17, wherein the PVOPs within a group of pictures (GOP) are further partitioned into multiple priority levels based on an inter-frame dependency.
20. A communication link as recited in claim 17, wherein the plurality of priority levels is categorized by temporal intervals.
21. A communication link as recited in claim 17, wherein the priority of levels substantially preserves a periodicity of the frames.
22. A communication link as recited in claim 15, wherein the transmitter drops certain frames based on the bandwidth considerations, and wherein none of the certain frames is dropped until all frames that depend on the certain frames are dropped.
PCT/IB2005/054140 2004-12-10 2005-12-08 Wireless video streaming using single layer coding and prioritized streaming WO2006061801A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP05823206A EP1825684A1 (en) 2004-12-10 2005-12-08 Wireless video streaming using single layer coding and prioritized streaming
JP2007545071A JP2008523689A (en) 2004-12-10 2005-12-08 Wireless video streaming and prioritized streaming using single layer coding
US11/721,225 US20090232202A1 (en) 2004-12-10 2005-12-08 Wireless video streaming using single layer coding and prioritized streaming

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63524004P 2004-12-10 2004-12-10
US60/635,240 2004-12-10

Publications (1)

Publication Number Publication Date
WO2006061801A1 true WO2006061801A1 (en) 2006-06-15

Family

ID=36147608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/054140 WO2006061801A1 (en) 2004-12-10 2005-12-08 Wireless video streaming using single layer coding and prioritized streaming

Country Status (5)

Country Link
US (1) US20090232202A1 (en)
EP (1) EP1825684A1 (en)
JP (1) JP2008523689A (en)
CN (1) CN101073268A (en)
WO (1) WO2006061801A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7668170B2 (en) 2007-05-02 2010-02-23 Sharp Laboratories Of America, Inc. Adaptive packet transmission with explicit deadline adjustment
US7684430B2 (en) 2006-09-06 2010-03-23 Hitachi, Ltd. Frame-based aggregation and prioritized channel access for traffic over wireless local area networks
US7706384B2 (en) 2007-04-20 2010-04-27 Sharp Laboratories Of America, Inc. Packet scheduling with quality-aware frame dropping for video streaming
US7953880B2 (en) 2006-11-16 2011-05-31 Sharp Laboratories Of America, Inc. Content-aware adaptive packet transmission
CN102187667A (en) * 2008-08-26 2011-09-14 Csir公司 Method for switching from a first coded video stream to a second coded video stream
EP2377277A2 (en) * 2008-12-10 2011-10-19 Motorola Solutions, Inc. Method and system for deterministic packet drop

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007014216A2 (en) 2005-07-22 2007-02-01 Cernium Corporation Directed attention digital video recordation
EP1971100A1 (en) * 2007-03-12 2008-09-17 Siemens Networks GmbH & Co. KG Method and device for processing data in a network component and system comprising such a device
US9215467B2 (en) 2008-11-17 2015-12-15 Checkvideo Llc Analytics-modulated coding of surveillance video
KR101632076B1 (en) * 2009-04-13 2016-06-21 삼성전자주식회사 Apparatus and method for transmitting stereoscopic image data according to priority
US8531961B2 (en) 2009-06-12 2013-09-10 Cygnus Broadband, Inc. Systems and methods for prioritization of data for intelligent discard in a communication network
US8627396B2 (en) 2009-06-12 2014-01-07 Cygnus Broadband, Inc. Systems and methods for prioritization of data for intelligent discard in a communication network
WO2010144833A2 (en) 2009-06-12 2010-12-16 Cygnus Broadband Systems and methods for intelligent discard in a communication network
US8823782B2 (en) 2009-12-31 2014-09-02 Broadcom Corporation Remote control with integrated position, viewer identification and optical and audio test
US8854531B2 (en) * 2009-12-31 2014-10-07 Broadcom Corporation Multiple remote controllers that each simultaneously controls a different visual presentation of a 2D/3D display
US9247286B2 (en) 2009-12-31 2016-01-26 Broadcom Corporation Frame formatting supporting mixed two and three dimensional video data communication
US20110157322A1 (en) 2009-12-31 2011-06-30 Broadcom Corporation Controlling a pixel array to support an adaptable light manipulator
JP5768332B2 (en) * 2010-06-24 2015-08-26 ソニー株式会社 Transmitter, receiver and communication system
CN101938341B (en) * 2010-09-17 2012-12-05 东华大学 Cross-node controlled online video stream selective retransmission method
CN102281436A (en) * 2011-03-15 2011-12-14 福建星网锐捷网络有限公司 Wireless video transmission method and device, and network equipment
US20130058406A1 (en) * 2011-09-05 2013-03-07 Zhou Ye Predictive frame dropping method used in wireless video/audio data transmission
CN103124348A (en) * 2011-11-17 2013-05-29 蓝云科技股份有限公司 Predictive frame dropping method used in wireless video/audio data transmission
CN103780917B (en) * 2012-10-19 2018-04-13 上海诺基亚贝尔股份有限公司 Method and network unit for the packet of intelligently adapted video
US9578333B2 (en) 2013-03-15 2017-02-21 Qualcomm Incorporated Method for decreasing the bit rate needed to transmit videos over a network by dropping video frames
JP5774652B2 (en) 2013-08-27 2015-09-09 ソニー株式会社 Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
CN103475878A (en) * 2013-09-06 2013-12-25 同观科技(深圳)有限公司 Video coding method and encoder
CN104378602B (en) * 2014-11-26 2017-06-23 福建星网锐捷网络有限公司 Video transmission method and device
KR102013403B1 (en) * 2015-05-27 2019-08-22 구글 엘엘씨 Spherical video streaming
CN105791260A (en) * 2015-11-30 2016-07-20 武汉斗鱼网络科技有限公司 Network self-adaptive stream media service quality control method and device
US9978181B2 (en) * 2016-05-25 2018-05-22 Ubisoft Entertainment System for virtual reality display
CN106973066A (en) * 2017-05-10 2017-07-21 福建星网智慧科技股份有限公司 H264 encoded videos data transmission method and system in a kind of real-time communication
JP6508270B2 (en) * 2017-09-13 2019-05-08 ソニー株式会社 Transmission apparatus, transmission method, reception apparatus and reception method
CN108307194A (en) * 2018-01-03 2018-07-20 西安万像电子科技有限公司 The transfer control method and device of image coding
KR20220124031A (en) 2021-03-02 2022-09-13 삼성전자주식회사 An electronic device for transceiving video packet and operating method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030072376A1 (en) * 2001-10-12 2003-04-17 Koninklijke Philips Electronics N.V. Transmission of video using variable rate modulation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5852565A (en) * 1996-01-30 1998-12-22 Demografx Temporal and resolution layering in advanced television
DE60020672T2 (en) * 2000-03-02 2005-11-10 Matsushita Electric Industrial Co., Ltd., Kadoma Method and apparatus for repeating the video data frames with priority levels
US20020147834A1 (en) * 2000-12-19 2002-10-10 Shih-Ping Liou Streaming videos over connections with narrow bandwidth
US7483487B2 (en) * 2002-04-11 2009-01-27 Microsoft Corporation Streaming methods and systems
US9544602B2 (en) * 2005-12-30 2017-01-10 Sharp Laboratories Of America, Inc. Wireless video transmission system
US7965771B2 (en) * 2006-02-27 2011-06-21 Cisco Technology, Inc. Method and apparatus for immediate display of multicast IPTV over a bandwidth constrained network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030072376A1 (en) * 2001-10-12 2003-04-17 Koninklijke Philips Electronics N.V. Transmission of video using variable rate modulation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ISOVIC D ET AL: "Timing constraints of MPEG-2 decoding for high quality video: misconceptions and realistic assumptions", REAL-TIME SYSTEMS, 2003. PROCEEDINGS. 15TH EUROMICRO CONFERENCE ON 2-4 JULY 2003, PISCATAWAY, NJ, USA,IEEE, 2 July 2003 (2003-07-02), pages 73 - 82, XP010644819, ISBN: 0-7695-1936-9 *
KROPFBERGER M ET AL: "Quality variations of different priority-based temporal video adaptation algorithms", MULTIMEDIA SIGNAL PROCESSING, 2004 IEEE 6TH WORKSHOP ON SIENA, ITALY SEPT. 29 - OCT. 1, 2004, PISCATAWAY, NJ, USA,IEEE, 29 September 2004 (2004-09-29), pages 183 - 186, XP010802116, ISBN: 0-7803-8578-0 *
SHAO H-R ET AL: "User-aware object-based video transmission over the next generation Internet", SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 16, no. 8, May 2001 (2001-05-01), pages 763 - 784, XP004249805, ISSN: 0923-5965 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7684430B2 (en) 2006-09-06 2010-03-23 Hitachi, Ltd. Frame-based aggregation and prioritized channel access for traffic over wireless local area networks
US7953880B2 (en) 2006-11-16 2011-05-31 Sharp Laboratories Of America, Inc. Content-aware adaptive packet transmission
US7706384B2 (en) 2007-04-20 2010-04-27 Sharp Laboratories Of America, Inc. Packet scheduling with quality-aware frame dropping for video streaming
US7668170B2 (en) 2007-05-02 2010-02-23 Sharp Laboratories Of America, Inc. Adaptive packet transmission with explicit deadline adjustment
CN102187667A (en) * 2008-08-26 2011-09-14 Csir公司 Method for switching from a first coded video stream to a second coded video stream
EP2377277A2 (en) * 2008-12-10 2011-10-19 Motorola Solutions, Inc. Method and system for deterministic packet drop
EP2377277A4 (en) * 2008-12-10 2013-08-28 Motorola Solutions Inc Method and system for deterministic packet drop

Also Published As

Publication number Publication date
JP2008523689A (en) 2008-07-03
US20090232202A1 (en) 2009-09-17
EP1825684A1 (en) 2007-08-29
CN101073268A (en) 2007-11-14

Similar Documents

Publication Publication Date Title
US20090232202A1 (en) Wireless video streaming using single layer coding and prioritized streaming
Apostolopoulos et al. Video streaming: Concepts, algorithms, and systems
KR100855643B1 (en) Video coding
KR100932692B1 (en) Transmission of Video Using Variable Rate Modulation
Radha et al. Scalable internet video using MPEG-4
CA2562172C (en) Method and apparatus for frame prediction in hybrid video compression to enable temporal scalability
US7760661B2 (en) Apparatus and method for generating a transmit frame
Van der Schaar et al. Multiple description scalable coding using wavelet-based motion compensated temporal filtering
KR20040069360A (en) Targeted scalable video multicast based on client bandwidth or capability
KR100952185B1 (en) System and method for drift-free fractional multiple description channel coding of video using forward error correction codes
US20060005101A1 (en) System and method for providing error recovery for streaming fgs encoded video over an ip network
Fiandrotti et al. Traffic prioritization of H.264/SVC video over 802.11e ad hoc wireless networks
EP1931148B1 (en) Transcoding node and method for multiple description transcoding
EP2308215B1 (en) Thinning of packet-switched video data
WO2010014239A2 (en) Staggercasting with hierarchical coding information
Hassan et al. Adaptive and ubiquitous video streaming over Wireless Mesh Networks
Gao et al. Real-Time scheduling for scalable video coding streaming system
Chen et al. Error-resilient video streaming over wireless networks using combined scalable coding and multiple-description coding
Mobasher et al. Cross layer image optimization (CLIO) for wireless video transmission over 802.11ad multi-gigabit channels
Ali et al. Distortion-based slice level prioritization for real-time video over QoS-enabled wireless networks
Chung A novel selective frame discard method for 3D video over IP networks
Wang et al. Error control in video communications
Radakovic et al. Low complexity adaptation of H.264/MPEG-4 SVC for multiple description video coding
Woods et al. Streaming Video Compression for Heterogeneous Networks
Maarif et al. Video streaming over wireless LAN using scalable extension of H.264/AVC

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005823206

Country of ref document: EP

Ref document number: 2007545071

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200580041963.5

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 11721225

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2005823206

Country of ref document: EP