GB2562243A - Channel switching - Google Patents

Channel switching

Info

Publication number
GB2562243A
Authority
GB
United Kingdom
Prior art keywords
video
level
channel
reproduction quality
frame
Prior art date
Legal status
Granted
Application number
GB1707373.5A
Other versions
GB2562243B (en)
GB201707373D0 (en)
Inventor
Maarek Ilan
Current Assignee
V Nova International Ltd
Original Assignee
V Nova International Ltd
Priority date
Filing date
Publication date
Application filed by V Nova International Ltd filed Critical V Nova International Ltd
Priority to GB1707373.5A priority Critical patent/GB2562243B/en
Publication of GB201707373D0 publication Critical patent/GB201707373D0/en
Publication of GB2562243A publication Critical patent/GB2562243A/en
Application granted granted Critical
Publication of GB2562243B publication Critical patent/GB2562243B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440227 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
    • H04N21/4383 Accessing a communication channel
    • H04N21/4384 Accessing a communication channel involving operations to reduce the access time, e.g. fast-tuning for reducing channel switching latency
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4621 Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/64322 IP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video receiver 200, 201, 202 receives and outputs a first channel of encoded video data 301 and receives a signal to switch from the first channel to a second channel of encoded video data 302. The second channel comprises a plurality of video frames, each video frame encoded in hierarchical layers comprising a base layer and one or more enhancement layers. The base layer is decodable to enable each video frame to be presented at a base level of video reproduction quality; each enhancement layer, together with the base layer and any preceding enhancement layers, is decodable to enable each video frame to be presented at an increasingly enhanced level of quality. Upon receiving one or more first decodable layers of the second channel, whose decoding would result in a first video frame at a quality less than an expected level of video reproduction quality, the layers are decoded and the frame is presented. The first decodable layers may be the base layer and may be decoded as soon as they are received. Second decodable layers may then be received which provide a higher level of video quality. The expected level of video quality may be based on the bandwidth available at the video receiver.

Description

CHANNEL SWITCHING
FIELD OF THE INVENTION
The invention relates to channel switching in a video receiver, and particularly but not exclusively to switching television or streaming video channels very quickly and reliably.
BACKGROUND OF THE INVENTION
It is an aim in the television broadcasting field to achieve channel switching times on set-top boxes of less than 700 ms. This applies to terrestrial (over-the-air or cable) and satellite video broadcasts. Here, a set-top box generally tunes to a carrier frequency to receive compressed video data for a particular broadcast channel, such as a channel of video provided by a content provider like the BBC in the United Kingdom (e.g. BBC1, BBC2, and the BBC World Service are all television channels provided by the BBC). The speed of channel switching is generally limited by how quickly a tuner can tune to the channel frequency of another television channel (other than the one currently being displayed), demodulate and buffer the compressed video data, and decode the compressed video data into a format suitable for presentation on a display, such as a television. Multiple television channels may be multiplexed together on one carrier frequency, and so tuning and demodulating may not be necessary for channel changes within the multiplexed data.
More recently, television broadcasters have been using IPTV or over-the-top (OTT) technologies to broadcast their channels of video content over the Internet (e.g. the BBC allow users to receive and watch television channels over the Internet using a proprietary BBC iPlayer (RTM) computer program installed on a mobile phone, tablet or smart TV, at more or less the same time as users viewing the channel via terrestrial or satellite means). Additionally, new broadcasters have emerged who exclusively broadcast television programmes using IPTV or OTT. With IPTV and OTT, the speed of channel switching is generally limited by how quickly a receiver can obtain a stream of compressed video data, buffer the compressed video data, and decode the compressed video data into a format suitable for presentation on a display device.
The compression of digital video data in the context of broadcasting television channels is very important, whether the broadcasting occurs via terrestrial means (over-the-air or cable), satellite means, or over dedicated networks via IPTV or the Internet via OTT. This is because of bandwidth limitations in each delivery mechanism. The most widely used video compression techniques are based on technology standards developed by the MPEG consortium and the International Telecommunications Union (ITU) (including MPEG-2/H.262, H.263 and MPEG-4 Part 10/H.264). These video compression techniques are block-based and use intra-frame (spatial) and inter-frame (temporal) prediction to greatly reduce the size of raw video data so that the video data can practicably be transmitted in compressed form over terrestrial television broadcast means, by satellite, and over data networks such as the Internet. A typical structure used in inter-frame (temporal) compression techniques is the so-called group of pictures (GOP), namely a sequence of compressed frames of various formats, including at least one I frame (i.e. a frame which is encoded and decoded based exclusively on itself and without reference to any other frames), one or more P frames (i.e. frames which are encoded and decoded using forward prediction from previous frames) and potentially one or more B frames (i.e. frames which are encoded and decoded using bidirectional prediction from both previous frames and future frames). In order to decode a GOP structure, the decoder must begin the decoding process by first decoding an I frame. This implies that, when GOP structures are used in a compressed video data stream, the decoder generally must wait for the next I frame in the compressed video data stream before the decoding process can commence. As such, an additional delay in the decoding of a compressed video data stream is created. Further, as a GOP unit may be of variable size, and typically includes a single I frame together with several dependent frames (P and B frames) which are predicted from the I frame or from other frames in the GOP, an unpredictable delay is present in video data compressed using inter-frame prediction. Moreover, since dependent frames are encoded and decoded using either forward prediction from previous frames (P frames) or bidirectional prediction from both previous frames and future frames in the GOP structure (B frames), in general the whole GOP must be received before the frames of the GOP unit can be presented in a reliable manner on a display device.
As already mentioned, channel switching delay is undesirable. Similarly, an unpredictable channel switching delay is also undesirable.
Prior art patent publication US 2011/019813 A1 acknowledges the problem with GOP structures in IPTV channel switching. The proposed solution is four-fold. Firstly, an IPTV set top box requesting a channel change is connected to a relatively low-quality picture-in-picture stream of the desired channel. By receiving a relatively low-quality stream, the decoding process is quicker, because the bit rate of the video stream is generally smaller and the relatively low-quality video stream is easier to decode than more complex coded video. As a result of the lower bit rate, the video is also quicker to buffer. Secondly, the picture-in-picture stream is delivered to the set top box at an enhanced speed, allowing for quicker buffering and sooner decoding. Thirdly, a separate cache is created and used in the IPTV network, close to the set top box, which stores and buffers all relevant IPTV channels at a picture-in-picture quality level so that, upon receiving a signal to switch channels from the set top box, the cached picture-in-picture stream is more quickly made available to the set top box. Fourthly, the cached picture-in-picture stream is deliberately made to begin with an I frame. US '813 claims that the disclosed solution delivers switching time reductions of approximately 0.45 seconds by using the picture-in-picture stream, and approximately 0.9 to 1.4 seconds when combined with boosting the delivery speed of the cached picture-in-picture stream while providing an I frame first (assuming a buffer size of 1 second of video and a GOP size of 0.5 seconds of video - see Fig. 4). However, there may still exist an unpredictable delay due to the decoding requirements of B frames, which are predicted from subsequent frames in the stream that must themselves first be received and decoded. Also, the solution of US '813 requires that a full resolution stream eventually replaces the picture-in-picture stream at the set top box, which may cause synchronisation errors and an unsatisfactory jump in the video output to a viewer at switchover.
In conclusion, although the above solutions aim to improve channel switching, there remains a need to improve it further. In particular, further reducing the time taken to switch channels in an IPTV or OTT environment is critical. It is also desirable to have more reliable and consistent channel switching times. A further aim is to reduce or eliminate synchronisation errors.
SUMMARY OF THE INVENTION
A first aspect of the invention provides the method recited in the attached claim 1, and further in all of the dependent claims. There is also provided a video receiver as set forth in the claims.
In this way, the switching delay is reduced by decoding and outputting for presentation the first frame of a desired video channel at a level of video reproduction quality lower than a maximum level of video reproduction quality in the hierarchically layered structure and lower than the level of video reproduction quality expected by the receiver for the desired video channel. Video channel switching times are thus more reliable and consistent.
Also, as a direct result of the claimed invention:
1. There is a reduced need to cache all picture-in-picture streams for all available television channels, because there is a reduced need to wait for an I frame or a full GOP structure at best quality, and so there is a reduced need to cache video so that an I frame is always delivered first. Hardware requirements (e.g. edge routers, caches) are thus reduced, and Internet bandwidth requirements are also reduced.
2. There is a reduced need for two or more tuners in a terrestrial or satellite set top box to tune, simultaneously, to two or more television channel carrier frequencies or sub-bitstreams in a multiplexed bitstream.
3. There is increased resilience to a changing transmission bit rate for separate television channels, especially when the compressed video data is transmitted over the Internet.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 is a block diagram showing a system for performing an example channel switching method;
Figure 2 is a block diagram showing an example client device having a channel switching capability;
Figure 3 is an abstract example illustration of encoded video data arranged in two separate channels each having video frames and hierarchical layers;
Figure 4 is an abstract example time-line illustration of outputting for presentation a first frame of a first channel of encoded video data at a maximum level of video reproduction quality;
Figure 5 is an abstract example time-line illustration of a first example of switching from a first channel to a second channel;
Figure 6 is an abstract example time-line illustration of a second example of switching from a first channel to a second channel;
Figure 7 is a flow chart outlining a method of switching channels, at the client device of Figure 2, using an encoded data stream arranged in hierarchical layers.
DETAILED DESCRIPTION OF THE EMBODIMENT(S)
Figure 1 is a block diagram showing a system 10 for performing an example channel switching method. The system 10 comprises a video content server 20 which is arranged to deliver encoded video data 300, itself comprising a first channel 301 of video data and a second channel 302 of video data, via one or more encoded data streams A, B, C. The encoded data streams A, B, C are transmitted to one or more client devices 200, 201, 202 over a data network 40, and additionally or alternatively over a broadcast network 50. Each client device 200, 201, 202 comprises the necessary receiving and data decoding capability to receive and decode an appropriate one or more of the encoded data streams A, B, C.
The data network 40 can be any type of data network suitable for connecting two or more computing devices, such as a local area network or a wide area network, and can include terrestrial wireless and wired connections, and satellite connections. The data network 40 may also be or include telecommunications networks, and in particular telecommunications networks that provide cellular data coverage. In some implementations, the data network 40 would include the Internet or other IP-enabled network, and connections thereto. The data network 40 is differentiated from the broadcast network 50 in that the encoded data streams A, B sent over the data network 40 are provided in accordance with IPTV (Internet Protocol Television) or OTT (over-the-top technology) methods. The data network 40 is also likely to be used for data transmission other than video data and associated data.
The broadcast network 50 is a dedicated terrestrial (over-the-air, or cable) or satellite broadcast network configured to transmit digital television channels to client devices, and is dedicated to providing video data and associated data for those digital television channels. An example system would conform to the Digital Video Broadcasting (DVB) standards used mainly in Europe, or an equivalent standardised system such as the Advanced Television Systems Committee (ATSC) digital television standards used chiefly in North America.
The video content server 20 can be any suitable video data storage and delivery server which is able to deliver encoded data to the client devices 200, 201, 202 over data network 40 or broadcast network 50. Video content servers are well known in the art for delivering channels of video data, and for data network 40 the video content server 20 may use unicast and multicast protocols. The video content server 20 is arranged to store the encoded video data 300, comprising the first channel 301 of video data and the second channel 302 of video data, and to provide the encoded video data 300 in channels using the one or more encoded data streams A, B, C to the client devices 200, 201, 202. The encoded video data 300 is generated by an encoder (not shown in Figure 1), which may be located on the video content server 20, or elsewhere. The encoder generates the encoded data 300 for broadcast over broadcast network 50, or for on-demand streaming over data network 40, or provides a "live" streaming service over data network 40, where a channel of encoded data 300 is generated in substantially real-time for real-time streaming. While only a single video content server 20 is shown in the example system 10 of Figure 1, it is expected that a plurality of video content servers 20 will be used, and that dedicated video content servers 20 will be used for digital television broadcasting over broadcast network 50 and for streaming over data network 40.
The client devices 200, 201, 202 are one of any number of suitable video receiving devices, or streaming clients, and include set-top boxes, digital televisions, smartphones, tablet computers, laptop computers, desktop computers, video conferencing equipment, etc. An example laptop computer 200 is shown connected to video content server 20 via data network 40. Here the laptop computer 200 receives the first channel 301 of encoded video content 300 via first encoded data stream A using either IPTV or OTT protocols. While not shown in Figure 1, the laptop computer 200 may comprise a receiver for use with broadcast network 50 and may receive an encoded data stream or streams from video content server 20 over broadcast network 50 in addition to or instead of encoded data stream A transmitted over data network 40. An example tablet computer or smartphone
201 is shown receiving a second channel 302 of encoded video data over data network 40, via encoded data stream B, using either IPTV or OTT protocols. An example smart TV 202 is shown receiving both first channel 301 of encoded video data and second channel 302 of encoded video data, via multiplexed encoded data stream C, over broadcast network 50 using appropriate DVB, ATSC or equivalent protocols. Additionally, smart TV 202 is shown having an optional connection 42 to data network 40 for receiving additional video or related content using IPTV or OTT protocols, such as additional video channels or an electronic programme guide.
The term channel is used to mean a connection which conveys encoded video data from a source to a destination. The term first or second channel of encoded video data refers to the encoded video data received over that channel, such as over a particular carrier wave at a particular frequency, or via a logical connection over a multiplexed medium. More particularly, but not necessarily, the channel of encoded video data has content from a common source, such as the BBC, or on a common theme. Most particularly, the channel of encoded data represents a television station channel such as BBC1 or BBC2, but may represent a single video or video clip rather than a curated continuous set of television programmes. It is important that the physical or logical connection used to receive the first channel differs from that used to receive the second channel. So, the invention also anticipates channel switches from video content streams at one quality level (such as BBC1) to video content streams at another quality level (such as BBC1 HD).
Figure 2 is a block diagram showing an example client device 200 of the client devices 200, 201, 202 shown in Figure 1. The example client device 200 has a channel switching capability. The example client device 200 comprises a communications port 210, a communications bus 215, a computer processor 220, a buffer 230, a decoder 240, an output 250 and an input 260. The communications bus 215 represents how the various components of the client device 200 are physically and logically connected, and allows for control and data signals to pass between the various components.
The computer processor 220 is configured to control reception of one or more of the encoded data streams A, B, C via the communications port 210 and to extract the correct encoded video data as appropriate, e.g. encoded video data for either the first channel 301 or the second channel 302. The extracted encoded video data is then buffered, or cached, in buffer 230, before being sent to the decoder 240. The decoder 240 is configured to decode the received part of the encoded video data and to pass the decoded video data to the output 250 for presentation to a user of the client device 200.
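By way of illustration only, the receive, buffer, decode and output path described above might be sketched as follows. This is a minimal sketch, not the patented implementation: the class ClientDevice and its methods are hypothetical stand-ins for the communications port 210, processor 220, buffer 230, decoder 240 and output 250, and the "decoding" is a placeholder.

```python
from collections import deque

class ClientDevice:
    """Minimal sketch of the receive -> buffer -> decode -> output path of Figure 2.
    Component names mirror the description; the method bodies are placeholders."""

    def __init__(self, wanted_channel):
        self.wanted_channel = wanted_channel
        self.buffer = deque()                     # buffer 230

    def receive(self, packet):
        # communications port 210 + processor 220: keep only payloads for the wanted channel
        if packet["channel"] == self.wanted_channel:
            self.buffer.append(packet["payload"])

    def decode_next(self):
        # decoder 240: "decode" the oldest buffered payload
        return f"decoded({self.buffer.popleft()})" if self.buffer else None

    def output(self, picture):
        # output 250: present the decoded frame to a user
        if picture is not None:
            print("presenting:", picture)

# Example: two channels multiplexed into one stream; only channel 302 is extracted
device = ClientDevice(wanted_channel=302)
for packet in ({"channel": 301, "payload": "frame-1"}, {"channel": 302, "payload": "frame-1"}):
    device.receive(packet)
device.output(device.decode_next())
```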
It is envisaged that an MPEG transport stream is used to deliver the first channel 301 of encoded video data and the second channel 302 of encoded video data as encoded payload data across data network 40 or broadcast network 50, as appropriate. Known methods of reconstructing the encoded payload data are employed.
When a channel change signal is received at input 260, the processor 220 is configured to switch the decoded video data presented to a user on output 250, from for example that derived from the first channel 301 of encoded video data to the second channel 302 of encoded video data. In an example with reference to Figure 1 and laptop computer 200, the processor 220 will communicate with video content server 20 and negotiate to receive a replacement encoded data stream D (not shown) which contains the second channel 302 of encoded video data. In an example with reference to Figure 1 and smart TV 202, the processor 220 will decode the packets of data containing encoded video data for the second channel 302 in the multiplexed data stream C, rather than decode the data packets containing encoded video data for the first channel 301, or vice versa. Alternatively, the smart TV 202 may need to retune to receive a separate multiplexed encoded data stream E (not shown) in order to decode the appropriate packets of data for a user's desired television channel. Regardless, the invention has broad application to all types of television channel transmission or video streaming and reception systems.
The above description is a simplified version of how encoded data streams are received, buffered, decoded and, if appropriate, output, and is provided not as a comprehensive discussion as to how the decoding and display of encoded video data in client devices works, but so as to give sufficient context in which to explain the invention.
Figure 3 is an abstract example illustration of encoded video data 300 arranged to be transmitted in two separate channels 301, 302 of encoded video data.
First channel 301 of encoded video data comprises a plurality of video frames, three of which are shown in Figure 3, namely first video frame 301-1, second video frame 301-2, and third video frame 301-3. Second channel 302 of encoded video data also comprises a plurality of video frames, three of which are shown in Figure 3, namely first video frame 302-1, second video frame 302-2, and third video frame 302-3.
Three video frames are shown per channel of encoded video data for ease of illustration, but in theory there will likely be hundreds or thousands of video frames for each channel of encoded video data, and in most broadcast situations, the video frames of each television channel will be continuously output by the video content server 20 for all or at least a significant part of each day. Each video frame represents a still image of the video data for each channel of encoded video data.
The encoded video data 300 is also arranged in a plurality of hierarchical layers, and in this illustrative example three hierarchical layers, namely a base layer LOQ#0, a first enhancement layer LOQ#1, and a second enhancement layer LOQ#2. Each video frame is encoded in this way, to three levels of video reproduction quality corresponding to each hierarchical layer LOQ#0, LOQ#1, LOQ#2. Fewer or more enhancement layers may be used.
In more detail, each video frame comprises:
• a base layer 301-1-0, 302-1-0 in hierarchical layer LOQ#0, which contains encoded video data which when decoded would result in a decoded video frame at a base level of reproduction quality LOQ#0;
• a first enhancement layer 301-1-1, 302-1-1 in hierarchical layer LOQ#1 which, together with the corresponding base layer 301-1-0, 302-1-0, contains encoded video data which when decoded would result in a decoded video frame at a first enhanced level of reproduction quality LOQ#1, which is of greater reproduction quality than the base level of reproduction quality LOQ#0; and
• a further, second, enhancement layer 301-1-2, 302-1-2 in hierarchical layer LOQ#2 which, together with the corresponding base layer 301-1-0, 302-1-0 and first enhancement layer 301-1-1, 302-1-1, contains encoded video data which when decoded would result in a decoded video frame at a further, second, enhanced level of reproduction quality LOQ#2, which is of greater reproduction quality than the first enhanced level of reproduction quality LOQ#1.
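The hierarchical arrangement of a single video frame described above can be modelled in code as follows. This is an illustrative sketch only: LayeredFrame and decode_to_level are not names used in the patent, and the "decoding" is a placeholder for a real hierarchical decoder.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LayeredFrame:
    """One video frame encoded as a base layer (LOQ#0) plus zero or more enhancement layers."""
    base_layer: bytes                                              # decodable on its own
    enhancement_layers: List[bytes] = field(default_factory=list)  # LOQ#1, LOQ#2, ...

    def max_loq(self) -> int:
        """Highest level of quality (LOQ index) representable with the layers present."""
        return len(self.enhancement_layers)

def decode_to_level(frame: LayeredFrame, target_loq: int) -> str:
    """'Decode' a frame up to the requested LOQ using the base layer and all
    preceding enhancement layers (a stand-in for a real layered decoder)."""
    if target_loq > frame.max_loq():
        raise ValueError("requested LOQ is not present in this frame")
    used = [frame.base_layer] + frame.enhancement_layers[:target_loq]
    return f"frame decoded at LOQ#{target_loq} from {len(used)} layer(s)"

# A frame carrying a base layer and two enhancement layers, as in Figure 3
frame = LayeredFrame(base_layer=b"302-1-0", enhancement_layers=[b"302-1-1", b"302-1-2"])
print(decode_to_level(frame, 0))   # base level of reproduction quality only
print(decode_to_level(frame, 2))   # maximum level for this frame
```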
As will be apparent to the reader, it is also convenient to name the levels of reproduction quality consistent with the hierarchical layers, so the base level of reproduction quality is named LOQ#0, the first enhancement level of reproduction quality is named LOQ#1, and the second enhancement level of reproduction quality is named LOQ#2.
While every video frame 301-1, 302-1 in the encoded data stream 300 must have a base layer, the video frames do not need to have the same number of enhancement layers. For example, the first video frame 301-1 of the first channel 301 of encoded video data may have only one enhancement layer 301-1-1, while the second video frame 301-2 may have two enhancement layers 301-2-1 and 301-2-2, or more, or no enhancement layers. The same applies to every frame in the encoded video data 300, which may have zero, one, or more enhancement layers. The arrangement of the encoded video data 300 will depend on the precise type of video data, and may be determined by the encoder used to encode a particular part of the encoded video data 300. Also, it is typical for layers of the encoded video data 300 to be encoded to the same level of quality, such as all base layers being encoded to the same base level of reproduction quality LOQ#0, but exceptions may apply depending for example on the constraints of the encoding process. Base layers may differ in level of video reproduction quality between frames, as might enhancement layers.
To illustrate further, example levels of video reproduction quality for each layer, and associated layer bit rates (and hence minimum bandwidths) are shown in Table 1 below.
Table 1

Layer  | Encoder profile | Resolution        | Frame rate | Layer bit rate | Minimum bandwidth (cumulative)
LOQ#0  | A               | 360p (480x360)    | 15 fps     | 0.75 Mbit/s    | 0.75 Mbit/s
LOQ#1  | B               | 1080p (1920x1080) | 15 fps     | 0.75 Mbit/s    | 1.5 Mbit/s
LOQ#2  | C               | 1080p (1920x1080) | 30 fps     | 0.5 Mbit/s     | 2 Mbit/s
As shown in Table 1, base layers for each video frame are encoded using encoder profile A, which gives a relatively low base resolution of 360p (480x360) at 15 frames per second (fps) at a bit rate of 0.75 Mbit/s. This allows client devices 200 having relatively low bandwidth connections across data network 40 to at least stream the base layers for each frame and produce a viewable output (the bandwidth available must match or exceed the bit rate to enable playback without pauses). The base layers for each frame may be encoded to have lower bit rates, or higher bit rates.
The base layers are encoded using well-known codecs, such as for example the H.264/MPEG-4 AVC family of codecs or derived codecs. A GOP structure may be used.
The first enhancement layers for each frame at LOQ#1 are encoded using encoder profile B, which gives an enhanced resolution of 1080p (1920x1080) at 15 fps at a layer bit rate of 0.75 Mbit/s. The encoded information in the first enhancement layer LOQ#1 is advantageously enhancement information, which is used to enhance the information in the base layer LOQ#0. To use the first enhancement layers, the bandwidth of the connection between the client device 200 and the streaming server 20 over data network 40 must be equal to or greater than the combined bit rates of respective base layers and first enhanced layers, i.e. equal to or greater than 1.5 Mbit/s. This allows client devices 200 having higher bandwidth connections (either instantaneous or average) across data network 40 to at least stream the base layers for each frame and the corresponding first enhancement layers to output video at an improved quality of reproduction. Typically, broadcast network 50 would have sufficient bandwidth.
The second enhancement layers for each frame at LOQ#2 are encoded using encoder profile C, which gives an enhanced resolution of 1080p (1920x1080) at 30 fps at a layer bit rate of 0.5 Mbit/s. The encoded information in the second enhancement layer LOQ#2 is also enhancement information, which is used to enhance the information in the base layer LOQ#0 and first enhancement layer LOQ#1. To use the second enhancement layers, the bandwidth of the connection between the client device 200 and the streaming server 20 over data network 40 must be equal to or greater than the combined bit rates of the lower layers and second enhancement layers, i.e. equal to or greater than 2 Mbit/s. This allows client devices 200 having even higher bandwidth connections (either instantaneous or average) across data network 40 to at least stream the base layers, the first enhancement layers, and the second enhancement layers to output video at a further improved quality of reproduction. Typically, broadcast network 50 would have sufficient bandwidth.
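As a worked example of the bandwidth arithmetic above, the highest usable level of quality can be found by accumulating the per-layer bit rates of Table 1 until the available bandwidth is exceeded. This is a sketch assuming the example figures of Table 1; the function name is illustrative only and not part of the patent.

```python
# Per-layer bit rates from Table 1 (Mbit/s): LOQ#0, LOQ#1, LOQ#2.
# Cumulative requirements are therefore 0.75, 1.5 and 2.0 Mbit/s.
LAYER_BITRATES_MBPS = [0.75, 0.75, 0.5]

def highest_supported_loq(available_bandwidth_mbps: float) -> int:
    """Return the highest LOQ whose cumulative bit rate fits within the available
    bandwidth, or -1 if even the base layer does not fit (playback would stall)."""
    cumulative = 0.0
    best = -1
    for loq, rate in enumerate(LAYER_BITRATES_MBPS):
        cumulative += rate
        if cumulative <= available_bandwidth_mbps:
            best = loq
        else:
            break
    return best

print(highest_supported_loq(1.0))   # 0 -> base layer only (0.75 <= 1.0 < 1.5)
print(highest_supported_loq(1.8))   # 1 -> base + first enhancement layer
print(highest_supported_loq(2.5))   # 2 -> all three layers (2.0 <= 2.5)
```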
Of course, the skilled reader will appreciate that the particular resolution, bit rate, codec used, signal-to-noise ratio, or other encoder profile parameter for a particular layer will be application specific. In general, it may be advantageous to have many enhancement layers, so that a fine increase in quality of reproduction can be achieved for higher available bandwidths, and vice versa, a fine decrease in quality of reproduction for lower available bandwidths.
Advantageously, frames in the base layer LOQ#0 may be encoded with interdependencies on other frames and need not be stand-alone, independently decodable frames. For example, where an MPEG codec is used, there may be inter-dependency between frames at the base layer LOQ#0, and one or several group of pictures (GOP) structures may be used including I, P and optionally B frames. However, it is useful to have at least one enhancement layer LOQ#1, LOQ#2 where the enhancement information in that layer is independently decodable from other enhancement information for other video frames, or from other enhancement information at higher levels in the hierarchy.
In more detail, once the base layer for a video frame is decoded, the first enhancement layer should advantageously be decodable, with its corresponding base layer, independently from other enhancement layers.
In some situations, it is preferable for the base layer of video frames for a video channel to be decodable independently from the decoding of base layers of other frames in the video channel. Here, using a non-GOP structure further improves switching time.
Figure 4 is an abstract example time-line illustration of outputting for presentation the first frame 301-1 of the first channel 301 of video at a maximum level of video reproduction quality LOQ#2. In Figure 4, time t is generally shown advancing from bottom to top of the page along a curved line t, with time t=0 shown at the bottom of the curved line t, and time t=x shown at the top of the curved line t. The curved line t helps to give a sense of how the invention works as time passes.
As can be seen in Figure 4, the first video frame 301-1 of first channel 301 is decoded and output for presentation at the highest level of reproduction quality LOQ#2. All three layers of encoded data 301-1-0, 301-1-1, 301-1-2 for the first frame 301-1 are used in the decoding process, and base layer 301-1-0 is enhanced by first and second enhancement layers 301-1-1, 301-1-2. This process of receiving, decoding and outputting the first video channel 301 continues on a frame-by-frame basis until a channel switch command is received, or the client device 200 is switched off.
Figure 5 is an abstract example time-line illustration of switching from the first channel 301 to the second channel 302 according to a first example embodiment, when a channel switch command shown by dashed arrow 500 is received at the input 260 of client device 200. The client device 200 obtains encoded video data for the second channel 302.
Upon reception of the base layer 302-1-0 for the first frame 302-1 of the second channel 302, and sufficient other data for the base layer 302-1-0 to be decoded (for example all base layers in a GOP structure), the processor 220 of client device 200 causes the base layer and any other data to be buffered in buffer 230, and then sent to decoder 240 for decoding. As soon as decoding is completed by decoder 240 for the base layer 302-1-0, the decoded video data for the first frame 302-1 of the second channel 302 is sent for presentation to a user by output 250 at the base level of reproduction quality LOQ#0. Here, the base level of reproduction quality is less than that expected by the video receiver, which in this case could be LOQ#1 or LOQ#2. In this way, channel change times are reduced, because as soon as decodable video data is available for the second channel 302 the decodable video data is decoded and output for presentation. The time needed to receive and decode to the base level of reproduction quality LOQ#0 is reduced when compared to the time needed to receive and decode to a higher than base level of reproduction quality, such as LOQ#1 or LOQ#2. A staged-quality channel change can be achieved. Channel switching times are minimised by ensuring the steps of decoding and outputting occur immediately once the decodable layer is received/decoded.
Once the base layer 302-1-0 of the first frame 302-1 is passed to the decoder 240 for decoding to the base level of reproduction quality LOQ#0, the processor 220 is arranged to request, if appropriate, and buffer in buffer 230 the next enhancement layer 302-1-1 for the first frame 302-1. However, there may not be sufficient time remaining to receive, buffer and decode the next enhancement layer 302-1-1 for the first frame 302-1 before the second frame 302-2 of the second channel is due to be presented. In this case, under the control of the processor 220, the client device 200 is configured to receive, buffer and decode the second frame 302-2 in the usual way. In this example, the decoder 240 is able to decode the second frame 302-2 to level of reproduction quality LOQ#1, and so the base layer 302-2-0 and first enhancement layer 302-2-1 are received, decoded and output for presentation in the usual way. In an alternative example not shown, the decoder 240 is able to decode to level of reproduction quality LOQ#2, and so the base layer, first enhancement layer, and second enhancement layer are received, decoded and output for presentation. The decoder 240 may be controlled to decode at LOQ#0 or LOQ#1 before decoding at LOQ#2, and the decoded video data may be progressively output at increasing levels of video reproduction quality. Where there are inter-frame dependencies at a particular layer in the hierarchy, the necessary additional layers for those frames on which there is a dependency must also be received and decoded before the frame is able to be presented to a user at the appropriate level of reproduction quality.
While channel switching has been disclosed to occur as soon as a base layer of the first frame of the second, desired, channel is decoded and is available for presentation to a user, the system and corresponding method may be implemented whereby a predetermined number of enhancement layers may also need to be decoded so that the level of reproduction quality at switchover may be improved, with additional enhancement layers being used after the initial switchover, either for the first frame in the newly switched on channel, or for a subsequent frame, which may or may not be the immediately subsequent frame. In this way, the initial video reproduction quality is still less than expected for the second channel at the video receiver once reception and decoding the second channel is established, but is higher than base level of video reproduction quality. Examples of where a relatively high level of video reproduction quality may be achievable at switchover are where there exists a high-bandwidth connection between client device 200 and video content server 20, or where the client device 200 is particularly advanced and has relatively high processing and decoding capabilities, which are able to decode to higher levels using more enhancement layers in the time period before the next frame is due to be presented to a user.
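The staged-quality switching behaviour described with reference to Figure 5 might be sketched as follows. This is a hedged illustration under stated assumptions, not the claimed implementation: receive_layer, decode and present are hypothetical callables standing in for reception over the network, decoder 240 and output 250, and a fixed frame period is assumed as the presentation deadline.

```python
import time

FRAME_PERIOD_S = 1 / 30   # assumed deadline before the next frame is due to be presented

def switch_channel(receive_layer, decode, present, expected_loq):
    """Staged-quality channel switch: output the first decodable layer immediately,
    then add enhancement layers for the same frame while the frame period allows.

    receive_layer(loq) -> encoded layer data, or None if not yet received
    decode(layers)     -> decoded picture from the base layer plus preceding enhancement layers
    present(picture)   -> push the decoded picture to the display output
    """
    deadline = time.monotonic() + FRAME_PERIOD_S
    layers = [receive_layer(0)]          # base layer: the first decodable layer
    present(decode(layers))              # output at LOQ#0 as soon as it is decodable

    loq = 1
    while loq <= expected_loq and time.monotonic() < deadline:
        enhancement = receive_layer(loq)
        if enhancement is None:          # not received in time; enhance a later frame instead
            break
        layers.append(enhancement)
        present(decode(layers))          # re-present the same frame at a higher LOQ
        loq += 1

# Stub usage with placeholder callables
switch_channel(
    receive_layer=lambda loq: f"layer-{loq}".encode(),
    decode=lambda layers: f"picture from {len(layers)} layer(s)",
    present=print,
    expected_loq=2,
)
```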
Channel switch command 500 is shown occurring before any video output can occur from the first channel 301, which can happen when a user is channel surfing very quickly. However, it is more likely that at least some output from a video channel, such as the first channel 301 in this example, is made successfully before a channel switch command is received.
Figure 6 is an abstract example time-line illustration of switching from the first channel 301 to the second channel 302 according to a second example embodiment, when a channel switch command shown by dashed arrow 600 is received at the input 260 of client device 200.
The channel switch command 600 is received during output of frame 301-1 of the first channel 301. Frame 301-1 is shown being output at the highest level of video reproduction quality available, LOQ#2, which is also the expected level of video reproduction quality for the first channel 301. At the point in time at which the channel switch command 600 is received, the next frame 301-2 is being received and decoded for presentation. However, the channel switch command 600 causes the client device 200, controlled by processor 220, to discard any data received for the next frame 301-2, and instead to obtain encoded video data for the first available frame 302-2 of the second channel 302. As previously discussed, depending on the configuration of the client device 200, which may be based on one or more of available reception bandwidth, decoding capability of the decoder 240, and the encoding scheme used by the encoder, the client device may firstly output the frame 302-2 at the base level of reproduction quality LOQ#0, or the first enhancement level of reproduction quality LOQ#1. In other words, to improve the speed of channel switching, the client device does not attempt to decode and output the frame 302-2 at the maximum available level of reproduction quality, which in this example is also the expected level of video reproduction quality for the second channel 302. In this example, the client device 200 is configured to control decoder 240 to decode and output for presentation the frame 302-2 at the first enhancement level of reproduction quality LOQ#1 as soon as the available encoded data 302-2-0, 302-2-1 is received and buffered in buffer 230. Of course, decoding and output could take place as soon as a decodable base layer or other predetermined decodable layer is received.
The client device 200 then obtains the necessary further enhancement layer or layers for frame 302-2 so that the video frame can be output at progressively enhanced or at a maximum, which in this case is the expected, level of reproduction quality LOQ#2. In this case, client device 200, under control of the processor 220, obtains second enhancement layer 302-2-2 for the frame 302-2. Here, there is sufficient time remaining to receive, buffer and decode the second enhancement layer 302-2-2 before the next video frame 302-3 of the second channel 302 is due to be presented, and so the video frame 302-2 is presented to a user at client device 200, via output 250, twice at increasing levels of reproduction quality following the channel switch command 600 to switch from the first channel 301 to the second channel 302.
In other words, the frame may be output at progressively increasing levels of reproduction quality before the second frame is output.
There should then be sufficient time for following video frames 302-3, etc. in the second channel 302 to be decoded and output at a maximum, or expected, level of reproduction quality, depending on factors such as bandwidth availability, etc.
Again, where there are inter-frame dependencies at a particular layer in the hierarchy, then the necessary additional layers for those frames on which there is a dependency must also be received and decoded before the frame is able to be presented to a user at the appropriate level of reproduction quality. For best results, none of the enhancement layers would contain inter-frame dependencies.
Figure 7 is a flow chart outlining a method of channel switching, at a client device 200 of Figure 2 from a current channel to a desired channel. The desired channel is obtained through an encoded data stream arranged in hierarchical layers.
The flow chart is described with reference to the steps shown in Figure 7 in ascending number order as follows.

S1000: The client device 200 decodes and outputs a current channel of video data at a best level of reproduction quality. The best level of reproduction quality means the best level of reproduction quality available at the client device 200 at the time of reception of the encoded video data for the current channel. The capture quality of the original raw video, the type of encoder, the encoding ability of the encoder, the bandwidth of the data network 40 or broadcast network 50, the decoder type, the decoding ability of the decoder, and the display screen of the client device 200 all contribute to the final level of reproduction quality. When the client device 200 is outputting video for presentation at a best level of video reproduction quality, this means the best, or highest, level of video reproduction quality taking those factors into account, and this is also sometimes the expected level of video reproduction quality. The expected level of quality may be negotiated between the client device 200 and the video content server 20 in advance of the reception of the second channel. However, as already mentioned, the client device 200 may not yet have output the current channel at all, or may have done so at less than an expected level of reproduction quality.

S1010: The client device 200 receives a channel switch signal from input 260. The channel switch signal may originate from a remote control device or from a user interface, such as a keyboard, keypad or touchscreen interface of the client device 200.

S1020: The client device 200 obtains encoded video data, including hierarchical layer data, for the desired channel. This may be done by decoding different packets of an existing incoming multiplexed data stream, by re-tuning to an alternative data stream, single or multiplexed, by requesting a new data stream for the desired channel, or by other known methods.

S1030: The client device 200 monitors whether the decoder 240 is able to decode the first video frame obtained by the client device 200 for the desired channel at a switching level of video reproduction quality. The switching level of quality is less than the expected level of video reproduction quality achievable for the video frame assuming that time to output is not constrained. The switching level of video reproduction quality is chosen to balance the need to switch quickly to the desired channel against the need to present to a user the best possible level of reproduction quality for the desired channel. A base level of video reproduction quality is chosen for the quickest switching times, but one or more enhancement layers may be chosen. When the decoder is able to decode the first video frame to the switching level of video reproduction quality, the method moves to step S1040; otherwise the method continues to obtain the encoded video data for the desired channel.

S1040: The decoder 240 decodes and outputs for presentation the video frame layers up to and including the switching level of reproduction quality, which may be the base layer of reproduction quality or a higher enhancement layer of reproduction quality. The switching level of reproduction quality is a lower level than the expected level of reproduction quality which is determined to be achievable.

S1050: Subsequent to the decoding and outputting for presentation of the first frame at the switching level of reproduction quality, the client device 200 obtains higher level enhancement layers for the desired video channel, and then, returning to step S1000, decodes and outputs the desired channel of video data at the expected level of reproduction quality or, at the very least, a higher level of reproduction quality than the switching level of reproduction quality. Once the first frame of the desired channel is output at the switching level of reproduction quality, there may be enough time to obtain further enhancement layers for the first video frame and to present the first video frame at a higher than switching level of quality. Otherwise, the next video frame of the desired channel is obtained, decoded and output for presentation at the expected level of video reproduction quality, or at the very least the switching level of reproduction quality, until it becomes possible to increase the level of reproduction quality directly, or in stages, to the expected level of reproduction quality.
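Steps S1020 to S1050 might be condensed into code as in the sketch below. The helper names obtain_layers, decode and present are hypothetical placeholders rather than parts of the patent, and buffering, error handling and frame timing are omitted for brevity.

```python
def switch_to_channel(obtain_layers, decode, present, switching_loq, expected_loq):
    """Condensed sketch of steps S1020-S1050 of Figure 7 for one desired channel.

    obtain_layers(loq) blocks until layers up to and including the given LOQ are
    available and returns them; decode/present stand in for decoder 240 / output 250.
    """
    layers = obtain_layers(switching_loq)          # S1020/S1030: wait for a decodable set of layers
    present(decode(layers), loq=switching_loq)     # S1040: output at the lower switching LOQ first
    layers = obtain_layers(expected_loq)           # S1050: fetch higher enhancement layers...
    present(decode(layers), loq=expected_loq)      # ...and output at the expected LOQ

# Stub usage with placeholder callables
switch_to_channel(
    obtain_layers=lambda loq: [f"layer-{i}" for i in range(loq + 1)],
    decode=lambda layers: f"picture from {len(layers)} layer(s)",
    present=lambda picture, loq: print(f"LOQ#{loq}: {picture}"),
    switching_loq=0,
    expected_loq=2,
)
```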
The switching delay is reduced by immediately decoding and outputting for presentation at a level of quality lower than the expected or best achievable level of quality in the hierarchically layered structure. Increasing the level of reproduction quality to an expected level can occur smoothly and quickly within a frame or over a number of frames, without disturbances to the video output to a user and without complex synchronisation of multiple video streams. Hardware and processor resources are used more efficiently or are not needed at all. The level of video reproduction quality dynamically adjusts to changing channel conditions, using the hierarchical layer structure to adapt accordingly.
Full discussions on how the base layers and enhancement layers are encoded are given in international patent applications published as WO 2013/171173 and WO 2014/170819, both of which are incorporated herein by reference.
Although at least some aspects of the examples described herein with reference to the drawings comprise computer processes performed in processing systems or processors, examples described herein also extend to computer programs, for example computer programs on or in a carrier, adapted for putting the examples into practice. The carrier may be any entity or device capable of carrying the program.
The use of a modular structure such as the one depicted in any of the Figures also provides an advantage from an implementation and integration point of view, enabling simple integration into legacy systems as well as compatibility with legacy systems. By way of example, the channel switching method could be embodied as a plug-in (including libraries and/or source code) to existing firmware and/or software which already embodies a legacy decoding system (for example one that is already installed in legacy decoders).
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with at least one feature of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (22)

1. A method of switching channels in a video receiver, the method comprising: decoding and outputting for presentation a first channel of encoded video data; and receiving a signal to switch from the first channel of encoded video data to a second channel of encoded video data; characterised by: receiving the second channel of encoded video data, the second channel of encoded video data comprising a plurality of video frames, each video frame encoded in hierarchical layers comprising a base layer and one or more enhancement layers, the base layer being decodable to enable each video frame to be presented at a base level of video reproduction quality, and the one or more enhancement layers, together with the base layer and any preceding enhancement layer or layers, being decodable to enable each video frame to be presented at an increasingly enhanced level of video reproduction quality with each decoded enhancement layer; and upon reception of one or more first decodable layers for a first video frame of the second channel, wherein decoding of the one or more first decodable layers would result in a decoded first video frame at a first level of video reproduction quality less than an expected level of video reproduction quality expected by the video receiver for the plurality of video frames of the second channel: decoding the one or more first decodable layers; and outputting for presentation the first video frame at the first level of video reproduction quality.
2. The method of claim 1, wherein the step of decoding the one or more first decodable layers for the first video frame occurs immediately once the one or more first decodable layers are received.
3. The method of claim 1 or claim 2, wherein the step of outputting for presentation the first video frame at the first level of video reproduction quality occurs immediately once the one or more first decodable layers are decoded.
4. The method of any preceding claim, wherein, after decoding the one or more first decodable layers at the first level of video reproduction quality, and upon reception of an enhancement layer that when decoded would result in a decoded first video frame at an enhanced level of video reproduction quality, decoding the first frame using the enhancement layer, and outputting for presentation the first video frame at the enhanced level of video reproduction quality which is of a higher quality than the first level of video reproduction quality.
5. The method of claim 4, wherein, after decoding the first video frame at the enhanced level of video reproduction quality, and upon reception of a further enhancement layer that, when decoded, would result in a decoded first video frame at a further enhanced level of video reproduction quality, the method further comprises decoding the first video frame using the further enhancement layer, and outputting for presentation the first video frame at the further enhanced level of video reproduction quality, which is of a higher quality than the enhanced level of video reproduction quality.
6. The method of claim 4 or claim 5, wherein one of the enhanced level of reproduction quality or the further enhanced level of reproduction quality is the level of reproduction quality expected by the video receiver for the plurality of video frames of the second channel.
7. The method of any of claims 4 to 6, wherein the step of outputting for presentation the first video frame at the enhanced level of video reproduction quality or the further enhanced level of reproduction quality occurs after the outputting of the first video frame at the first level of video reproduction quality.
8. The method of any preceding claim, wherein the one or more first decodable layers is the base layer for the first video frame of the second channel.
9. The method of any preceding claim, wherein, upon reception of one or more second decodable layers for a second video frame of the second channel, wherein decoding of the one or more second decodable layers would result in a decoded second video frame at a second level of video reproduction quality, the method further comprises: decoding the one or more second decodable layers; and outputting for presentation the second video frame at the second level of video reproduction quality.
10. The method of claim 9, wherein the second level of video reproduction quality is of higher quality than the first level of video reproduction quality.
11. The method of claim 9 or claim 10, wherein, after decoding the one or more second decodable layers at the second level of video reproduction quality, and upon reception of an enhancement layer that, when decoded, would result in a decoded second video frame at an enhanced level of video reproduction quality with respect to the second level of video reproduction quality, the method further comprises decoding the second video frame using the enhancement layer, and outputting for presentation the second video frame at the enhanced level of video reproduction quality, which is of a higher quality than the second level of video reproduction quality.
12. The method of any of claims 9 to 11, wherein one of the second level of reproduction quality or the enhanced level of reproduction quality is the level of reproduction quality expected by the video receiver for the plurality of video frames of the second channel.
13. The method of any preceding claim, wherein the expected level of reproduction quality is determined by the video receiver in advance of reception thereof from a source of the encoded video data received on the second channel.
14. The method of claim 13, wherein the number of enhancement layers required to obtain the expected level of reproduction quality is negotiated by the video receiver in advance of reception thereof with a source of the encoded video data received on the second channel.
15. The method of claim 13 or claim 14, wherein the expected level of reproduction quality is dependent upon a bandwidth available at the video receiver for receiving the second channel.
16. The method of any preceding claim, wherein decoding a base layer for the first frame is dependent on the decoding of other base layers of other video frames of the second channel, and the one or more first decodable layers comprises the other base layers.
17. The method of claim 16, wherein the base layer for the first frame is part of a group of pictures (GOP) structure.
18. The method of any of claims 1 to 15, wherein decoding the base layer for the first frame is independent of the decoding of other base layers of other video frames of the second channel.
19. The method of any preceding claim, wherein the decoding of at least one or more enhancement layers for the first frame is independent from the decoding of other enhancement layers of other video frames of the second channel.
20. The method of any preceding claim, wherein the step of receiving the second channel of encoded video data comprises receiving the second channel of encoded video data over at least one of the following: a terrestrial broadcast link, such as an over-the-air or cable link; a satellite broadcast link; an internet protocol television (IPTV) connection; and an over-the-top technology (OTT) connection.
21. The method of any preceding claim, wherein the level of reproduction quality is or is related to one or more of the following: resolution; and compression.
22. A video receiver comprising:
a processor configured to control reception, buffering and decoding of two or more incoming channels of encoded video data;
a communications port configured to communicate with each source of the incoming channels of encoded video data; and
a decoder configured to decode and output for presentation each of the two or more incoming channels of encoded video data;
wherein the processor is configured to: cause the decoder to decode and output for presentation a first channel of encoded video data; receive a signal to switch from the first channel of encoded video data to a second channel of encoded video data; and receive the second channel of encoded video data;
characterised in that:
the second channel of encoded video data comprises a plurality of video frames, each video frame encoded in hierarchical layers comprising a base layer and one or more enhancement layers, the base layer being decodable to enable each video frame to be presented at a base level of video reproduction quality, and the one or more enhancement layers, together with the base layer and any preceding enhancement layer or layers, being decodable to enable each video frame to be presented at an increasingly enhanced level of video reproduction quality with each decoded enhancement layer; and
the decoder is configured, upon reception of one or more first decodable layers for a first video frame of the second channel, wherein decoding of the one or more first decodable layers would result in a decoded first video frame at a first level of video reproduction quality less than an expected level of video reproduction quality expected by the video receiver for the plurality of video frames of the second channel, to:
decode the one or more first decodable layers at the first level of video reproduction quality; and
output for presentation the first video frame at the first level of video reproduction quality.
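In less formal terms, the presentation flow recited in claims 1, 4 and 5 above can be illustrated with the following minimal, self-contained C++ sketch: the first frame of the newly selected channel is output as soon as its base layer is decodable and is then output again at progressively higher quality as each enhancement layer arrives, while a later frame whose layers are already buffered is output directly at the expected level. The three quality labels and the helper names are assumptions made for the sketch, not definitions taken from the claims.

#include <cstdio>
#include <string>
#include <vector>

// Quality levels, lowest to highest; the last entry stands for the level of
// reproduction quality the receiver expects for the new channel.
static const std::vector<std::string> kQualityLevels = {
    "base level (e.g. SD)", "enhanced level (e.g. HD)", "expected level (e.g. UHD)"};

// Decode-and-present step: with layers_available hierarchical layers held for
// a frame, the frame can be presented at quality index layers_available - 1.
static void present(unsigned frame, unsigned layers_available) {
    if (layers_available == 0) return;  // nothing decodable yet
    std::printf("frame %u presented at %s\n", frame,
                kQualityLevels[layers_available - 1].c_str());
}

int main() {
    // After the switch, the first frame's layers trickle in one at a time:
    // the frame is output as soon as the base layer is decodable, then
    // output again at a higher quality as each enhancement layer arrives.
    for (unsigned layers = 1; layers <= kQualityLevels.size(); ++layers) {
        present(/*frame=*/1, layers);
    }
    // A later frame of the new channel arrives with all of its layers
    // buffered, so it is presented directly at the expected level.
    present(/*frame=*/2, static_cast<unsigned>(kQualityLevels.size()));
    return 0;
}

Running the sketch prints one presentation line per step, ending with the later frame at the expected level, mirroring the progression from the first level of video reproduction quality up to the expected level described in the claims.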
GB1707373.5A 2017-05-08 2017-05-08 Channel switching Active GB2562243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1707373.5A GB2562243B (en) 2017-05-08 2017-05-08 Channel switching

Publications (3)

Publication Number Publication Date
GB201707373D0 GB201707373D0 (en) 2017-06-21
GB2562243A true GB2562243A (en) 2018-11-14
GB2562243B GB2562243B (en) 2022-02-09

Family

ID=59065702

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1707373.5A Active GB2562243B (en) 2017-05-08 2017-05-08 Channel switching

Country Status (1)

Country Link
GB (1) GB2562243B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010054719A1 (en) * 2008-11-12 2010-05-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Reducing a tune-in delay into a scalable encoded data stream
WO2010140867A2 (en) * 2009-06-05 2010-12-09 한국전자통신연구원 Streaming server and mobile terminal for reducing channel-changing delay, and a method therefor
US20130198403A1 (en) * 2012-02-01 2013-08-01 Eldon Technology Limited Remote viewing of media content using layered video encoding

Also Published As

Publication number Publication date
GB2562243B (en) 2022-02-09
GB201707373D0 (en) 2017-06-21

Similar Documents

Publication Publication Date Title
US8135040B2 (en) Accelerated channel change
US20110109808A1 (en) Method and apparatus for fast channel change using a secondary channel video stream
US8341672B2 (en) Systems, methods and computer readable media for instant multi-channel video content browsing in digital video distribution systems
US7644425B2 (en) Picture-in-picture mosaic
US20110109810A1 (en) Method an apparatus for fast channel change using a scalable video coding (svc) stream
US20090109988A1 (en) Video Decoder with an Adjustable Video Clock
US8571027B2 (en) System and method for multi-rate video delivery using multicast stream
WO2014088718A1 (en) Broadcast transition channel
KR102361314B1 (en) Method and apparatus for providing 360 degree virtual reality broadcasting services
US20110110418A1 (en) Scalable video coding method for fast channel change to increase coding efficiency
US20160337671A1 (en) Method and apparatus for multiplexing layered coded contents
CN111800662A (en) System and method for fast channel change
US20120008053A1 (en) Method and system for fast channel change between programs utilizing a single decoder to concurrently decode multiple programs
GB2562243A (en) Channel switching
US20100246685A1 (en) Compressed video decoding delay reducer
US7984477B2 (en) Real-time video compression
JP6501503B2 (en) Electronic device and signal processing method
US20140245361A1 (en) Multilevel satellite broadcasting system for providing hierarchical satellite broadcasting and operation method of the same
Amreev et al. Choosing a Compression Standard for Transmitting a Television Image
JP2016116069A (en) Image encoder, image decoder, image encoding method, image decoding method, and image encoding/decoding system
KR100994053B1 (en) System and Tuning Method for Internet Protocol TV Broadcasting Service, IPTV Set-Top Box
KR20130017404A (en) Apparatus and method for reducing zapping delay using hybrid multimedia service
JP2016100831A (en) Image encoder and image encode method

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: V-NOVA INTERNATIONAL LIMITED

Free format text: FORMER OWNER: V-NOVA LTD